2025-05-03 00:00:15.723504 | Job console starting...
2025-05-03 00:00:15.747720 | Updating repositories
2025-05-03 00:00:15.978498 | Preparing job workspace
2025-05-03 00:00:17.465025 | Running Ansible setup...
2025-05-03 00:00:23.513010 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-05-03 00:00:24.857717 |
2025-05-03 00:00:24.857847 | PLAY [Base pre]
2025-05-03 00:00:24.960557 |
2025-05-03 00:00:24.960700 | TASK [Setup log path fact]
2025-05-03 00:00:24.993506 | orchestrator | ok
2025-05-03 00:00:25.045418 |
2025-05-03 00:00:25.045559 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-03 00:00:25.118902 | orchestrator | ok
2025-05-03 00:00:25.145402 |
2025-05-03 00:00:25.145514 | TASK [emit-job-header : Print job information]
2025-05-03 00:00:25.233675 | # Job Information
2025-05-03 00:00:25.233919 | Ansible Version: 2.15.3
2025-05-03 00:00:25.233954 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-05-03 00:00:25.233981 | Pipeline: periodic-midnight
2025-05-03 00:00:25.234001 | Executor: 7d211f194f6a
2025-05-03 00:00:25.234017 | Triggered by: https://github.com/osism/testbed
2025-05-03 00:00:25.234033 | Event ID: e9d1a3e0d6e6401192e8625d8816f272
2025-05-03 00:00:25.243452 |
2025-05-03 00:00:25.243543 | LOOP [emit-job-header : Print node information]
2025-05-03 00:00:25.440304 | orchestrator | ok:
2025-05-03 00:00:25.440489 | orchestrator | # Node Information
2025-05-03 00:00:25.440518 | orchestrator | Inventory Hostname: orchestrator
2025-05-03 00:00:25.440538 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-05-03 00:00:25.440556 | orchestrator | Username: zuul-testbed05
2025-05-03 00:00:25.440572 | orchestrator | Distro: Debian 12.10
2025-05-03 00:00:25.440599 | orchestrator | Provider: static-testbed
2025-05-03 00:00:25.440621 | orchestrator | Label: testbed-orchestrator
2025-05-03 00:00:25.440644 | orchestrator | Product Name: OpenStack Nova
2025-05-03 00:00:25.440667 | orchestrator | Interface IP: 81.163.193.140
2025-05-03 00:00:25.459387 |
2025-05-03 00:00:25.459502 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-05-03 00:00:26.199640 | orchestrator -> localhost | changed
2025-05-03 00:00:26.210527 |
2025-05-03 00:00:26.210623 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-05-03 00:00:28.412461 | orchestrator -> localhost | changed
2025-05-03 00:00:28.462665 |
2025-05-03 00:00:28.462788 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-05-03 00:00:29.303417 | orchestrator -> localhost | ok
2025-05-03 00:00:29.311453 |
2025-05-03 00:00:29.311553 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-05-03 00:00:29.355629 | orchestrator | ok
2025-05-03 00:00:29.372015 | orchestrator | included: /var/lib/zuul/builds/b5e89fbdb2b248eda6b44d358a1c2c68/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-05-03 00:00:29.379469 |
2025-05-03 00:00:29.379552 | TASK [add-build-sshkey : Create Temp SSH key]
2025-05-03 00:00:30.892818 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-05-03 00:00:30.893000 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/b5e89fbdb2b248eda6b44d358a1c2c68/work/b5e89fbdb2b248eda6b44d358a1c2c68_id_rsa
2025-05-03 00:00:30.893030 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/b5e89fbdb2b248eda6b44d358a1c2c68/work/b5e89fbdb2b248eda6b44d358a1c2c68_id_rsa.pub
2025-05-03 00:00:30.893052 | orchestrator -> localhost | The key fingerprint is:
2025-05-03 00:00:30.893072 | orchestrator -> localhost | SHA256:iE/QQ0dhky4Q44dGJYfJ8fJea0raWD4wWPIJpmLeAX8 zuul-build-sshkey
2025-05-03 00:00:30.893090 | orchestrator -> localhost | The key's randomart image is:
2025-05-03 00:00:30.893107 | orchestrator -> localhost | +---[RSA 3072]----+
2025-05-03 00:00:30.893124 | orchestrator -> localhost | | .**+.*o |
2025-05-03 00:00:30.893140 | orchestrator -> localhost | | o=O o.. |
2025-05-03 00:00:30.893165 | orchestrator -> localhost | | *.=. |
2025-05-03 00:00:30.893181 | orchestrator -> localhost | | .+..*.o. |
2025-05-03 00:00:30.893197 | orchestrator -> localhost | | oo*..+.S |
2025-05-03 00:00:30.893213 | orchestrator -> localhost | |o..o=E . . |
2025-05-03 00:00:30.893233 | orchestrator -> localhost | |+ . oo= o |
2025-05-03 00:00:30.893249 | orchestrator -> localhost | | . . B.o |
2025-05-03 00:00:30.893275 | orchestrator -> localhost | | o +. |
2025-05-03 00:00:30.893292 | orchestrator -> localhost | +----[SHA256]-----+
2025-05-03 00:00:30.893335 | orchestrator -> localhost | ok: Runtime: 0:00:00.271759
2025-05-03 00:00:30.906881 |
2025-05-03 00:00:30.906973 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-05-03 00:00:30.977535 | orchestrator | ok
2025-05-03 00:00:30.997744 | orchestrator | included: /var/lib/zuul/builds/b5e89fbdb2b248eda6b44d358a1c2c68/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-05-03 00:00:31.016884 |
2025-05-03 00:00:31.016984 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-05-03 00:00:31.085049 | orchestrator | skipping: Conditional result was False
2025-05-03 00:00:31.092217 |
2025-05-03 00:00:31.092343 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-05-03 00:00:31.690323 | orchestrator | changed
2025-05-03 00:00:31.697606 |
2025-05-03 00:00:31.697686 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-05-03 00:00:31.983659 | orchestrator | ok
2025-05-03 00:00:31.999133 |
2025-05-03 00:00:31.999223 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-05-03 00:00:32.435923 | orchestrator | ok
2025-05-03 00:00:32.520777 |
2025-05-03 00:00:32.520881 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-05-03 00:00:32.976363 | orchestrator | ok
2025-05-03 00:00:32.990774 |
2025-05-03 00:00:32.990871 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-05-03 00:00:33.034648 | orchestrator | skipping: Conditional result was False
2025-05-03 00:00:33.041434 |
2025-05-03 00:00:33.041522 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-05-03 00:00:33.806116 | orchestrator -> localhost | changed
2025-05-03 00:00:33.820815 |
2025-05-03 00:00:33.820907 | TASK [add-build-sshkey : Add back temp key]
2025-05-03 00:00:34.507500 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/b5e89fbdb2b248eda6b44d358a1c2c68/work/b5e89fbdb2b248eda6b44d358a1c2c68_id_rsa (zuul-build-sshkey)
2025-05-03 00:00:34.507703 | orchestrator -> localhost | ok: Runtime: 0:00:00.044239
2025-05-03 00:00:34.516433 |
2025-05-03 00:00:34.524544 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-05-03 00:00:35.038149 | orchestrator | ok
2025-05-03 00:00:35.059232 |
2025-05-03 00:00:35.059363 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-05-03 00:00:35.120986 | orchestrator | skipping: Conditional result was False
2025-05-03 00:00:35.141948 |
2025-05-03 00:00:35.142058 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-05-03 00:00:35.668416 | orchestrator | ok
2025-05-03 00:00:35.724184 |
2025-05-03 00:00:35.724318 | TASK [validate-host : Define zuul_info_dir fact]
2025-05-03 00:00:35.778991 | orchestrator | ok
2025-05-03 00:00:35.794867 |
2025-05-03 00:00:35.794967 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-05-03 00:00:36.283601 | orchestrator -> localhost | ok
2025-05-03 00:00:36.294547 |
2025-05-03 00:00:36.294649 | TASK [validate-host : Collect information about the host]
2025-05-03 00:00:37.467697 | orchestrator | ok
2025-05-03 00:00:37.480201 |
2025-05-03 00:00:37.480314 | TASK [validate-host : Sanitize hostname]
2025-05-03 00:00:37.572633 | orchestrator | ok
2025-05-03 00:00:37.588351 |
2025-05-03 00:00:37.588527 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-05-03 00:00:38.794092 | orchestrator -> localhost | changed
2025-05-03 00:00:38.800646 |
2025-05-03 00:00:38.800731 | TASK [validate-host : Collect information about zuul worker]
2025-05-03 00:00:39.553105 | orchestrator | ok
2025-05-03 00:00:39.566641 |
2025-05-03 00:00:39.566737 | TASK [validate-host : Write out all zuul information for each host]
2025-05-03 00:00:40.528191 | orchestrator -> localhost | changed
2025-05-03 00:00:40.539429 |
2025-05-03 00:00:40.539519 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-05-03 00:00:40.841489 | orchestrator | ok
2025-05-03 00:00:40.850138 |
2025-05-03 00:00:40.850231 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-05-03 00:00:59.114975 | orchestrator | changed:
2025-05-03 00:00:59.115267 | orchestrator | .d..t...... src/
2025-05-03 00:00:59.115345 | orchestrator | .d..t...... src/github.com/
2025-05-03 00:00:59.115382 | orchestrator | .d..t...... src/github.com/osism/
2025-05-03 00:00:59.115414 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-05-03 00:00:59.115444 | orchestrator | RedHat.yml
2025-05-03 00:00:59.132033 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-05-03 00:00:59.132051 | orchestrator | RedHat.yml
2025-05-03 00:00:59.132103 | orchestrator | = 1.53.0"...
2025-05-03 00:01:14.110880 | orchestrator | 00:01:14.110 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-05-03 00:01:14.209014 | orchestrator | 00:01:14.208 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-05-03 00:01:15.800083 | orchestrator | 00:01:15.799 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
2025-05-03 00:01:17.047457 | orchestrator | 00:01:17.047 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-05-03 00:01:18.316706 | orchestrator | 00:01:18.316 STDOUT terraform: - Installing hashicorp/local v2.5.2...
2025-05-03 00:01:19.317264 | orchestrator | 00:01:19.316 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80)
2025-05-03 00:01:20.223719 | orchestrator | 00:01:20.223 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-05-03 00:01:21.316941 | orchestrator | 00:01:21.316 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-05-03 00:01:21.317053 | orchestrator | 00:01:21.316 STDOUT terraform: Providers are signed by their developers.
2025-05-03 00:01:21.317079 | orchestrator | 00:01:21.316 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-05-03 00:01:21.317101 | orchestrator | 00:01:21.316 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-05-03 00:01:21.317118 | orchestrator | 00:01:21.316 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-05-03 00:01:21.317137 | orchestrator | 00:01:21.317 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-05-03 00:01:21.317156 | orchestrator | 00:01:21.317 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-05-03 00:01:21.317174 | orchestrator | 00:01:21.317 STDOUT terraform: you run "tofu init" in the future.
2025-05-03 00:01:21.317819 | orchestrator | 00:01:21.317 STDOUT terraform: OpenTofu has been successfully initialized!
2025-05-03 00:01:21.317880 | orchestrator | 00:01:21.317 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-05-03 00:01:21.317951 | orchestrator | 00:01:21.317 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-05-03 00:01:21.317988 | orchestrator | 00:01:21.317 STDOUT terraform: should now work.
2025-05-03 00:01:21.318056 | orchestrator | 00:01:21.317 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-05-03 00:01:21.318080 | orchestrator | 00:01:21.317 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-05-03 00:01:21.318099 | orchestrator | 00:01:21.318 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-05-03 00:01:21.496660 | orchestrator | 00:01:21.496 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-05-03 00:01:21.707885 | orchestrator | 00:01:21.707 STDOUT terraform: Created and switched to workspace "ci"!
2025-05-03 00:01:21.707999 | orchestrator | 00:01:21.707 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-05-03 00:01:21.708128 | orchestrator | 00:01:21.707 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-05-03 00:01:21.708161 | orchestrator | 00:01:21.708 STDOUT terraform: for this configuration.
2025-05-03 00:01:21.976513 | orchestrator | 00:01:21.976 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-05-03 00:01:22.072154 | orchestrator | 00:01:22.071 STDOUT terraform: ci.auto.tfvars
2025-05-03 00:01:22.279655 | orchestrator | 00:01:22.279 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-05-03 00:01:23.283356 | orchestrator | 00:01:23.283 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-05-03 00:01:23.803062 | orchestrator | 00:01:23.802 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-05-03 00:01:24.011864 | orchestrator | 00:01:24.011 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-05-03 00:01:24.011986 | orchestrator | 00:01:24.011 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-05-03 00:01:24.011995 | orchestrator | 00:01:24.011 STDOUT terraform:   + create
2025-05-03 00:01:24.012013 | orchestrator | 00:01:24.011 STDOUT terraform:  <= read (data resources)
2025-05-03 00:01:24.012218 | orchestrator | 00:01:24.011 STDOUT terraform: OpenTofu will perform the following actions:
2025-05-03 00:01:24.012229 | orchestrator | 00:01:24.012 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-05-03 00:01:24.012251 | orchestrator | 00:01:24.012 STDOUT terraform:   # (config refers to values not yet known)
2025-05-03 00:01:24.012304 | orchestrator | 00:01:24.012 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-05-03 00:01:24.012343 | orchestrator | 00:01:24.012 STDOUT terraform:   + checksum = (known after apply)
2025-05-03 00:01:24.012394 | orchestrator | 00:01:24.012 STDOUT terraform:   + created_at = (known after apply)
2025-05-03 00:01:24.012439 | orchestrator | 00:01:24.012 STDOUT terraform:   + file = (known after apply)
2025-05-03 00:01:24.012488 | orchestrator | 00:01:24.012 STDOUT terraform:   + id = (known after apply)
2025-05-03 00:01:24.012550 | orchestrator | 00:01:24.012 STDOUT terraform:   + metadata = (known after apply)
2025-05-03 00:01:24.012579 | orchestrator | 00:01:24.012 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-05-03 00:01:24.012637 | orchestrator | 00:01:24.012 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-05-03 00:01:24.012662 | orchestrator | 00:01:24.012 STDOUT terraform:   + most_recent = true
2025-05-03 00:01:24.012711 | orchestrator | 00:01:24.012 STDOUT terraform:   + name = (known after apply)
2025-05-03 00:01:24.012746 | orchestrator | 00:01:24.012 STDOUT terraform:   + protected = (known after apply)
2025-05-03 00:01:24.012816 | orchestrator | 00:01:24.012 STDOUT terraform:   + region = (known after apply)
2025-05-03 00:01:24.012848 | orchestrator | 00:01:24.012 STDOUT terraform:   + schema = (known after apply)
2025-05-03 00:01:24.012893 | orchestrator | 00:01:24.012 STDOUT terraform:   + size_bytes = (known after apply)
2025-05-03 00:01:24.012943 | orchestrator | 00:01:24.012 STDOUT terraform:   + tags = (known after apply)
2025-05-03 00:01:24.012998 | orchestrator | 00:01:24.012 STDOUT terraform:   + updated_at = (known after apply)
2025-05-03 00:01:24.013007 | orchestrator | 00:01:24.012 STDOUT terraform:   }
2025-05-03 00:01:24.013158 | orchestrator | 00:01:24.013 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-05-03 00:01:24.013205 | orchestrator | 00:01:24.013 STDOUT terraform:   # (config refers to values not yet known)
2025-05-03 00:01:24.013262 | orchestrator | 00:01:24.013 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-05-03 00:01:24.013317 | orchestrator | 00:01:24.013 STDOUT terraform:   + checksum = (known after apply)
2025-05-03 00:01:24.013353 | orchestrator | 00:01:24.013 STDOUT terraform:   + created_at = (known after apply)
2025-05-03 00:01:24.013398 | orchestrator | 00:01:24.013 STDOUT terraform:   + file = (known after apply)
2025-05-03 00:01:24.013445 | orchestrator | 00:01:24.013 STDOUT terraform:   + id = (known after apply)
2025-05-03 00:01:24.013488 | orchestrator | 00:01:24.013 STDOUT terraform:   + metadata = (known after apply)
2025-05-03 00:01:24.013540 | orchestrator | 00:01:24.013 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-05-03 00:01:24.013573 | orchestrator | 00:01:24.013 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-05-03 00:01:24.013603 | orchestrator | 00:01:24.013 STDOUT terraform:   + most_recent = true
2025-05-03 00:01:24.013646 | orchestrator | 00:01:24.013 STDOUT terraform:   + name = (known after apply)
2025-05-03 00:01:24.013689 | orchestrator | 00:01:24.013 STDOUT terraform:   + protected = (known after apply)
2025-05-03 00:01:24.013732 | orchestrator | 00:01:24.013 STDOUT terraform:   + region = (known after apply)
2025-05-03 00:01:24.013797 | orchestrator | 00:01:24.013 STDOUT terraform:   + schema = (known after apply)
2025-05-03 00:01:24.013863 | orchestrator | 00:01:24.013 STDOUT terraform:   + size_bytes = (known after apply)
2025-05-03 00:01:24.013910 | orchestrator | 00:01:24.013 STDOUT terraform:   + tags = (known after apply)
2025-05-03 00:01:24.013963 | orchestrator | 00:01:24.013 STDOUT terraform:   + updated_at = (known after apply)
2025-05-03 00:01:24.013971 | orchestrator | 00:01:24.013 STDOUT terraform:   }
2025-05-03 00:01:24.014053 | orchestrator | 00:01:24.013 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-05-03 00:01:24.014100 | orchestrator | 00:01:24.014 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-05-03 00:01:24.014155 | orchestrator | 00:01:24.014 STDOUT terraform:   + content = (known after apply)
2025-05-03 00:01:24.014211 | orchestrator | 00:01:24.014 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-03 00:01:24.014273 | orchestrator | 00:01:24.014 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-03 00:01:24.014320 | orchestrator | 00:01:24.014 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-03 00:01:24.014376 | orchestrator | 00:01:24.014 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-03 00:01:24.014419 | orchestrator | 00:01:24.014 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-03 00:01:24.014488 | orchestrator | 00:01:24.014 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-03 00:01:24.014529 | orchestrator | 00:01:24.014 STDOUT terraform:   + directory_permission = "0777"
2025-05-03 00:01:24.014562 | orchestrator | 00:01:24.014 STDOUT terraform:   + file_permission = "0644"
2025-05-03 00:01:24.014612 | orchestrator | 00:01:24.014 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-05-03 00:01:24.014661 | orchestrator | 00:01:24.014 STDOUT terraform:   + id = (known after apply)
2025-05-03 00:01:24.014677 | orchestrator | 00:01:24.014 STDOUT terraform:   }
2025-05-03 00:01:24.014714 | orchestrator | 00:01:24.014 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-05-03 00:01:24.014747 | orchestrator | 00:01:24.014 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-05-03 00:01:24.014829 | orchestrator | 00:01:24.014 STDOUT terraform:   + content = (known after apply)
2025-05-03 00:01:24.014859 | orchestrator | 00:01:24.014 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-03 00:01:24.014906 | orchestrator | 00:01:24.014 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-03 00:01:24.014955 | orchestrator | 00:01:24.014 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-03 00:01:24.015009 | orchestrator | 00:01:24.014 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-03 00:01:24.015049 | orchestrator | 00:01:24.014 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-03 00:01:24.015102 | orchestrator | 00:01:24.015 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-03 00:01:24.015128 | orchestrator | 00:01:24.015 STDOUT terraform:   + directory_permission = "0777"
2025-05-03 00:01:24.015160 | orchestrator | 00:01:24.015 STDOUT terraform:   + file_permission = "0644"
2025-05-03 00:01:24.015208 | orchestrator | 00:01:24.015 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-05-03 00:01:24.015266 | orchestrator | 00:01:24.015 STDOUT terraform:   + id = (known after apply)
2025-05-03 00:01:24.015299 | orchestrator | 00:01:24.015 STDOUT terraform:   }
2025-05-03 00:01:24.015307 | orchestrator | 00:01:24.015 STDOUT terraform:   # local_file.inventory will be created
2025-05-03 00:01:24.015335 | orchestrator | 00:01:24.015 STDOUT terraform:   + resource "local_file" "inventory" {
2025-05-03 00:01:24.015385 | orchestrator | 00:01:24.015 STDOUT terraform:   + content = (known after apply)
2025-05-03 00:01:24.015442 | orchestrator | 00:01:24.015 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-03 00:01:24.015481 | orchestrator | 00:01:24.015 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-03 00:01:24.015528 | orchestrator | 00:01:24.015 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-03 00:01:24.015576 | orchestrator | 00:01:24.015 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-03 00:01:24.015623 | orchestrator | 00:01:24.015 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-03 00:01:24.015677 | orchestrator | 00:01:24.015 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-03 00:01:24.015701 | orchestrator | 00:01:24.015 STDOUT terraform:   + directory_permission = "0777"
2025-05-03 00:01:24.015733 | orchestrator | 00:01:24.015 STDOUT terraform:   + file_permission = "0644"
2025-05-03 00:01:24.015776 | orchestrator | 00:01:24.015 STDOUT terraform:   + filename = "inventory.ci"
2025-05-03 00:01:24.015859 | orchestrator | 00:01:24.015 STDOUT terraform:   + id = (known after apply)
2025-05-03 00:01:24.015878 | orchestrator | 00:01:24.015 STDOUT terraform:   }
2025-05-03 00:01:24.015925 | orchestrator | 00:01:24.015 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-05-03 00:01:24.015958 | orchestrator | 00:01:24.015 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-05-03 00:01:24.016011 | orchestrator | 00:01:24.015 STDOUT terraform:   + content = (sensitive value)
2025-05-03 00:01:24.016048 | orchestrator | 00:01:24.015 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-03 00:01:24.016104 | orchestrator | 00:01:24.016 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-03 00:01:24.016140 | orchestrator | 00:01:24.016 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-03 00:01:24.016198 | orchestrator | 00:01:24.016 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-03 00:01:24.016239 | orchestrator | 00:01:24.016 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-03 00:01:24.016294 | orchestrator | 00:01:24.016 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-03 00:01:24.016321 | orchestrator | 00:01:24.016 STDOUT terraform:   + directory_permission = "0700"
2025-05-03 00:01:24.016353 | orchestrator | 00:01:24.016 STDOUT terraform:   + file_permission = "0600"
2025-05-03 00:01:24.016393 | orchestrator | 00:01:24.016 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-05-03 00:01:24.016440 | orchestrator | 00:01:24.016 STDOUT terraform:   + id = (known after apply)
2025-05-03 00:01:24.016464 | orchestrator | 00:01:24.016 STDOUT terraform:   }
2025-05-03 00:01:24.016497 | orchestrator | 00:01:24.016 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-05-03 00:01:24.016534 | orchestrator | 00:01:24.016 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-05-03 00:01:24.016572 | orchestrator | 00:01:24.016 STDOUT terraform:   + id = (known after apply)
2025-05-03 00:01:24.016591 | orchestrator | 00:01:24.016 STDOUT terraform:   }
2025-05-03 00:01:24.016655 | orchestrator | 00:01:24.016 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-05-03 00:01:24.016721 | orchestrator | 00:01:24.016 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-05-03 00:01:24.016754 | orchestrator | 00:01:24.016 STDOUT terraform:   + attachment = (known after apply)
2025-05-03 00:01:24.016801 | orchestrator | 00:01:24.016 STDOUT terraform:   + availability_zone = "nova"
2025-05-03 00:01:24.016835 | orchestrator | 00:01:24.016 STDOUT terraform:   + id = (known after apply)
2025-05-03 00:01:24.016874 | orchestrator | 00:01:24.016 STDOUT terraform:   + image_id = (known after apply)
2025-05-03 00:01:24.016916 | orchestrator | 00:01:24.016 STDOUT terraform:   + metadata = (known after apply)
2025-05-03 00:01:24.016972 | orchestrator | 00:01:24.016 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-05-03 00:01:24.017003 | orchestrator | 00:01:24.016 STDOUT terraform:   + region = (known after apply)
2025-05-03 00:01:24.017032 | orchestrator | 00:01:24.016 STDOUT terraform:   + size = 80
2025-05-03 00:01:24.017070 | orchestrator | 00:01:24.017 STDOUT terraform:   + volume_type = "ssd"
2025-05-03 00:01:24.017139 | orchestrator | 00:01:24.017 STDOUT terraform:   }
2025-05-03 00:01:24.017148 | orchestrator | 00:01:24.017 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-05-03 00:01:24.017188 | orchestrator | 00:01:24.017 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-03 00:01:24.017235 | orchestrator | 00:01:24.017 STDOUT terraform:   + attachment = (known after apply)
2025-05-03 00:01:24.017253 | orchestrator | 00:01:24.017 STDOUT terraform:   + availability_zone = "nova"
2025-05-03 00:01:24.017293 | orchestrator | 00:01:24.017 STDOUT terraform:   + id = (known after apply)
2025-05-03 00:01:24.017339 | orchestrator | 00:01:24.017 STDOUT terraform:   + image_id = (known after apply)
2025-05-03 00:01:24.017371 | orchestrator | 00:01:24.017 STDOUT terraform:   + metadata = (known after apply)
2025-05-03 00:01:24.017419 | orchestrator | 00:01:24.017 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-05-03 00:01:24.017457 | orchestrator | 00:01:24.017 STDOUT terraform:   + region = (known after apply)
2025-05-03 00:01:24.017491 | orchestrator | 00:01:24.017 STDOUT terraform:   + size = 80
2025-05-03 00:01:24.017511 | orchestrator | 00:01:24.017 STDOUT terraform:   + volume_type = "ssd"
2025-05-03 00:01:24.017529 | orchestrator | 00:01:24.017 STDOUT terraform:   }
2025-05-03 00:01:24.017587 | orchestrator | 00:01:24.017 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-05-03 00:01:24.017644 | orchestrator | 00:01:24.017 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-03 00:01:24.017682 | orchestrator | 00:01:24.017 STDOUT terraform:   + attachment = (known after apply)
2025-05-03 00:01:24.017719 | orchestrator | 00:01:24.017 STDOUT terraform:   + availability_zone = "nova"
2025-05-03 00:01:24.017750 | orchestrator | 00:01:24.017 STDOUT terraform:   + id = (known after apply)
2025-05-03 00:01:24.017823 | orchestrator | 00:01:24.017 STDOUT terraform:   + image_id = (known after apply)
2025-05-03 00:01:24.017843 | orchestrator | 00:01:24.017 STDOUT terraform:   + metadata = (known after apply)
2025-05-03 00:01:24.017893 | orchestrator | 00:01:24.017 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-05-03 00:01:24.017934 | orchestrator | 00:01:24.017 STDOUT terraform:   + region = (known after apply)
2025-05-03 00:01:24.017958 | orchestrator | 00:01:24.017 STDOUT terraform:   + size = 80
2025-05-03 00:01:24.017984 | orchestrator | 00:01:24.017 STDOUT terraform:   + volume_type = "ssd"
2025-05-03 00:01:24.018001 | orchestrator | 00:01:24.017 STDOUT terraform:   }
2025-05-03 00:01:24.018119 | orchestrator | 00:01:24.018 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-05-03 00:01:24.018175 | orchestrator | 00:01:24.018 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-03 00:01:24.018216 | orchestrator | 00:01:24.018 STDOUT terraform:   + attachment = (known after apply)
2025-05-03 00:01:24.018243 | orchestrator | 00:01:24.018 STDOUT terraform:   + availability_zone = "nova"
2025-05-03 00:01:24.018284 | orchestrator | 00:01:24.018 STDOUT terraform:   + id = (known after apply)
2025-05-03 00:01:24.018323 | orchestrator | 00:01:24.018 STDOUT terraform:   + image_id = (known after apply)
2025-05-03 00:01:24.018361 | orchestrator | 00:01:24.018 STDOUT terraform:   + metadata = (known after apply)
2025-05-03 00:01:24.018410 | orchestrator | 00:01:24.018 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-05-03 00:01:24.018450 | orchestrator | 00:01:24.018 STDOUT terraform:   + region = (known after apply)
2025-05-03 00:01:24.018477 | orchestrator | 00:01:24.018 STDOUT terraform:   + size = 80
2025-05-03 00:01:24.018506 | orchestrator | 00:01:24.018 STDOUT terraform:   + volume_type = "ssd"
2025-05-03 00:01:24.018522 | orchestrator | 00:01:24.018 STDOUT terraform:   }
2025-05-03 00:01:24.018581 | orchestrator | 00:01:24.018 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-05-03 00:01:24.018641 | orchestrator | 00:01:24.018 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-03 00:01:24.018677 | orchestrator | 00:01:24.018 STDOUT terraform:   + attachment = (known after apply)
2025-05-03 00:01:24.018703 | orchestrator | 00:01:24.018 STDOUT terraform:   + availability_zone = "nova"
2025-05-03 00:01:24.018743 | orchestrator | 00:01:24.018 STDOUT terraform:   + id = (known after apply)
2025-05-03 00:01:24.018781 | orchestrator | 00:01:24.018 STDOUT terraform:   + image_id = (known after apply)
2025-05-03 00:01:24.018853 | orchestrator | 00:01:24.018 STDOUT terraform:   + metadata = (known after apply)
2025-05-03 00:01:24.018901 | orchestrator | 00:01:24.018 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-05-03 00:01:24.018940 | orchestrator | 00:01:24.018 STDOUT terraform:   + region = (known after apply)
2025-05-03 00:01:24.018966 | orchestrator | 00:01:24.018 STDOUT terraform:   + size = 80
2025-05-03 00:01:24.019001 | orchestrator | 00:01:24.018 STDOUT terraform:   + volume_type = "ssd"
2025-05-03 00:01:24.019019 | orchestrator | 00:01:24.018 STDOUT terraform:   }
2025-05-03 00:01:24.019074 | orchestrator | 00:01:24.019 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-05-03 00:01:24.019127 | orchestrator | 00:01:24.019 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-03 00:01:24.019163 | orchestrator | 00:01:24.019 STDOUT terraform:   + attachment = (known after apply)
2025-05-03 00:01:24.019187 | orchestrator | 00:01:24.019 STDOUT terraform:   + availability_zone = "nova"
2025-05-03 00:01:24.019223 | orchestrator | 00:01:24.019 STDOUT terraform:   + id = (known after apply)
2025-05-03 00:01:24.019259 | orchestrator | 00:01:24.019 STDOUT terraform:   + image_id = (known after apply)
2025-05-03 00:01:24.019294 | orchestrator | 00:01:24.019 STDOUT terraform:   + metadata = (known after apply)
2025-05-03 00:01:24.019338 | orchestrator | 00:01:24.019 STDOUT terraform:   + name = "testbed-volume-4-node-base"
2025-05-03 00:01:24.019374 | orchestrator | 00:01:24.019 STDOUT terraform:   + region = (known after apply)
2025-05-03 00:01:24.019397 | orchestrator | 00:01:24.019 STDOUT terraform:   + size = 80
2025-05-03 00:01:24.019421 | orchestrator | 00:01:24.019 STDOUT terraform:   + volume_type = "ssd"
2025-05-03 00:01:24.019429 | orchestrator | 00:01:24.019 STDOUT terraform:   }
2025-05-03 00:01:24.019485 | orchestrator | 00:01:24.019 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-05-03 00:01:24.019536 | orchestrator | 00:01:24.019 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-03 00:01:24.019570 | orchestrator | 00:01:24.019 STDOUT terraform:   + attachment = (known after apply)
2025-05-03 00:01:24.019595 | orchestrator | 00:01:24.019 STDOUT terraform:   + availability_zone = "nova"
2025-05-03 00:01:24.019632 | orchestrator | 00:01:24.019 STDOUT terraform:   + id = (known after apply)
2025-05-03 00:01:24.019668 | orchestrator | 00:01:24.019 STDOUT terraform:   + image_id = (known after apply)
2025-05-03 00:01:24.019703 | orchestrator | 00:01:24.019 STDOUT terraform:   + metadata = (known after apply)
2025-05-03 00:01:24.019748 | orchestrator | 00:01:24.019 STDOUT terraform:   + name = "testbed-volume-5-node-base"
2025-05-03 00:01:24.019783 | orchestrator | 00:01:24.019 STDOUT terraform:   + region = (known after apply)
2025-05-03 00:01:24.019823 | orchestrator | 00:01:24.019 STDOUT terraform:   + size = 80
2025-05-03 00:01:24.019841 | orchestrator | 00:01:24.019 STDOUT terraform:   + volume_type = "ssd"
2025-05-03 00:01:24.019857 | orchestrator | 00:01:24.019 STDOUT terraform:   }
2025-05-03 00:01:24.019910 | orchestrator | 00:01:24.019 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-05-03 00:01:24.019957 | orchestrator | 00:01:24.019 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-03 00:01:24.019992 | orchestrator | 00:01:24.019 STDOUT terraform:   + attachment = (known after apply)
2025-05-03 00:01:24.020015 | orchestrator | 00:01:24.019 STDOUT terraform:   + availability_zone = "nova"
2025-05-03 00:01:24.020054 | orchestrator | 00:01:24.020 STDOUT terraform:   + id = (known after apply)
2025-05-03 00:01:24.020086 | orchestrator | 00:01:24.020 STDOUT terraform:   + metadata = (known after apply)
2025-05-03 00:01:24.020130 | orchestrator | 00:01:24.020 STDOUT terraform:   + name = "testbed-volume-0-node-0"
2025-05-03 00:01:24.020165 | orchestrator | 00:01:24.020 STDOUT terraform:   + region = (known after apply)
2025-05-03 00:01:24.020190 | orchestrator | 00:01:24.020 STDOUT terraform:   + size = 20
2025-05-03 00:01:24.020214 | orchestrator | 00:01:24.020 STDOUT terraform:   + volume_type = "ssd"
2025-05-03 00:01:24.020230 | orchestrator | 00:01:24.020 STDOUT terraform:   }
2025-05-03 00:01:24.020281 | orchestrator | 00:01:24.020 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2025-05-03 00:01:24.020330 | orchestrator | 00:01:24.020 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-03 00:01:24.020363 | orchestrator | 00:01:24.020 STDOUT terraform:   + attachment = (known after apply)
2025-05-03 00:01:24.020387 | orchestrator | 00:01:24.020 STDOUT terraform:
+ availability_zone = "nova" 2025-05-03 00:01:24.020423 | orchestrator | 00:01:24.020 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.020458 | orchestrator | 00:01:24.020 STDOUT terraform:  + metadata = (known after apply) 2025-05-03 00:01:24.020499 | orchestrator | 00:01:24.020 STDOUT terraform:  + name = "testbed-volume-1-node-1" 2025-05-03 00:01:24.020536 | orchestrator | 00:01:24.020 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.020560 | orchestrator | 00:01:24.020 STDOUT terraform:  + size = 20 2025-05-03 00:01:24.020584 | orchestrator | 00:01:24.020 STDOUT terraform:  + volume_type = "ssd" 2025-05-03 00:01:24.020601 | orchestrator | 00:01:24.020 STDOUT terraform:  } 2025-05-03 00:01:24.020652 | orchestrator | 00:01:24.020 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-05-03 00:01:24.020704 | orchestrator | 00:01:24.020 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-03 00:01:24.020738 | orchestrator | 00:01:24.020 STDOUT terraform:  + attachment = (known after apply) 2025-05-03 00:01:24.020765 | orchestrator | 00:01:24.020 STDOUT terraform:  + availability_zone = "nova" 2025-05-03 00:01:24.020813 | orchestrator | 00:01:24.020 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.020848 | orchestrator | 00:01:24.020 STDOUT terraform:  + metadata = (known after apply) 2025-05-03 00:01:24.020889 | orchestrator | 00:01:24.020 STDOUT terraform:  + name = "testbed-volume-2-node-2" 2025-05-03 00:01:24.020924 | orchestrator | 00:01:24.020 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.020949 | orchestrator | 00:01:24.020 STDOUT terraform:  + size = 20 2025-05-03 00:01:24.020973 | orchestrator | 00:01:24.020 STDOUT terraform:  + volume_type = "ssd" 2025-05-03 00:01:24.020989 | orchestrator | 00:01:24.020 STDOUT terraform:  } 2025-05-03 00:01:24.021040 | orchestrator | 00:01:24.020 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-05-03 00:01:24.021088 | orchestrator | 00:01:24.021 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-03 00:01:24.021123 | orchestrator | 00:01:24.021 STDOUT terraform:  + attachment = (known after apply) 2025-05-03 00:01:24.021147 | orchestrator | 00:01:24.021 STDOUT terraform:  + availability_zone = "nova" 2025-05-03 00:01:24.021184 | orchestrator | 00:01:24.021 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.021219 | orchestrator | 00:01:24.021 STDOUT terraform:  + metadata = (known after apply) 2025-05-03 00:01:24.021263 | orchestrator | 00:01:24.021 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-05-03 00:01:24.021297 | orchestrator | 00:01:24.021 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.021323 | orchestrator | 00:01:24.021 STDOUT terraform:  + size = 20 2025-05-03 00:01:24.021346 | orchestrator | 00:01:24.021 STDOUT terraform:  + volume_type = "ssd" 2025-05-03 00:01:24.021362 | orchestrator | 00:01:24.021 STDOUT terraform:  } 2025-05-03 00:01:24.021414 | orchestrator | 00:01:24.021 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-05-03 00:01:24.021461 | orchestrator | 00:01:24.021 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-03 00:01:24.021497 | orchestrator | 00:01:24.021 STDOUT terraform:  + attachment = (known after apply) 2025-05-03 00:01:24.021522 | orchestrator | 00:01:24.021 STDOUT terraform:  + availability_zone = "nova" 2025-05-03 00:01:24.021557 | orchestrator | 00:01:24.021 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.021592 | orchestrator | 00:01:24.021 STDOUT terraform:  + metadata = (known after apply) 2025-05-03 00:01:24.021641 | orchestrator | 00:01:24.021 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-05-03 00:01:24.021694 | orchestrator | 00:01:24.021 STDOUT 
terraform:  + region = (known after apply) 2025-05-03 00:01:24.021729 | orchestrator | 00:01:24.021 STDOUT terraform:  + size = 20 2025-05-03 00:01:24.021764 | orchestrator | 00:01:24.021 STDOUT terraform:  + volume_type = "ssd" 2025-05-03 00:01:24.021816 | orchestrator | 00:01:24.021 STDOUT terraform:  } 2025-05-03 00:01:24.021898 | orchestrator | 00:01:24.021 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-05-03 00:01:24.021952 | orchestrator | 00:01:24.021 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-03 00:01:24.021986 | orchestrator | 00:01:24.021 STDOUT terraform:  + attachment = (known after apply) 2025-05-03 00:01:24.022009 | orchestrator | 00:01:24.021 STDOUT terraform:  + availability_zone = "nova" 2025-05-03 00:01:24.022095 | orchestrator | 00:01:24.022 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.022135 | orchestrator | 00:01:24.022 STDOUT terraform:  + metadata = (known after apply) 2025-05-03 00:01:24.022167 | orchestrator | 00:01:24.022 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-05-03 00:01:24.022199 | orchestrator | 00:01:24.022 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.022221 | orchestrator | 00:01:24.022 STDOUT terraform:  + size = 20 2025-05-03 00:01:24.022243 | orchestrator | 00:01:24.022 STDOUT terraform:  + volume_type = "ssd" 2025-05-03 00:01:24.022251 | orchestrator | 00:01:24.022 STDOUT terraform:  } 2025-05-03 00:01:24.022303 | orchestrator | 00:01:24.022 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-05-03 00:01:24.022349 | orchestrator | 00:01:24.022 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-03 00:01:24.022381 | orchestrator | 00:01:24.022 STDOUT terraform:  + attachment = (known after apply) 2025-05-03 00:01:24.022403 | orchestrator | 00:01:24.022 STDOUT terraform:  + availability_zone = "nova" 
2025-05-03 00:01:24.022437 | orchestrator | 00:01:24.022 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.022469 | orchestrator | 00:01:24.022 STDOUT terraform:  + metadata = (known after apply) 2025-05-03 00:01:24.022508 | orchestrator | 00:01:24.022 STDOUT terraform:  + name = "testbed-volume-6-node-0" 2025-05-03 00:01:24.022539 | orchestrator | 00:01:24.022 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.022562 | orchestrator | 00:01:24.022 STDOUT terraform:  + size = 20 2025-05-03 00:01:24.022587 | orchestrator | 00:01:24.022 STDOUT terraform:  + volume_type = "ssd" 2025-05-03 00:01:24.022594 | orchestrator | 00:01:24.022 STDOUT terraform:  } 2025-05-03 00:01:24.022642 | orchestrator | 00:01:24.022 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-05-03 00:01:24.022689 | orchestrator | 00:01:24.022 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-03 00:01:24.022719 | orchestrator | 00:01:24.022 STDOUT terraform:  + attachment = (known after apply) 2025-05-03 00:01:24.022740 | orchestrator | 00:01:24.022 STDOUT terraform:  + availability_zone = "nova" 2025-05-03 00:01:24.022795 | orchestrator | 00:01:24.022 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.022819 | orchestrator | 00:01:24.022 STDOUT terraform:  + metadata = (known after apply) 2025-05-03 00:01:24.022858 | orchestrator | 00:01:24.022 STDOUT terraform:  + name = "testbed-volume-7-node-1" 2025-05-03 00:01:24.022891 | orchestrator | 00:01:24.022 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.022912 | orchestrator | 00:01:24.022 STDOUT terraform:  + size = 20 2025-05-03 00:01:24.022934 | orchestrator | 00:01:24.022 STDOUT terraform:  + volume_type = "ssd" 2025-05-03 00:01:24.022949 | orchestrator | 00:01:24.022 STDOUT terraform:  } 2025-05-03 00:01:24.022996 | orchestrator | 00:01:24.022 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-05-03 00:01:24.023040 | orchestrator | 00:01:24.022 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-03 00:01:24.023073 | orchestrator | 00:01:24.023 STDOUT terraform:  + attachment = (known after apply) 2025-05-03 00:01:24.023093 | orchestrator | 00:01:24.023 STDOUT terraform:  + availability_zone = "nova" 2025-05-03 00:01:24.023125 | orchestrator | 00:01:24.023 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.023158 | orchestrator | 00:01:24.023 STDOUT terraform:  + metadata = (known after apply) 2025-05-03 00:01:24.023196 | orchestrator | 00:01:24.023 STDOUT terraform:  + name = "testbed-volume-8-node-2" 2025-05-03 00:01:24.023228 | orchestrator | 00:01:24.023 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.023250 | orchestrator | 00:01:24.023 STDOUT terraform:  + size = 20 2025-05-03 00:01:24.023272 | orchestrator | 00:01:24.023 STDOUT terraform:  + volume_type = "ssd" 2025-05-03 00:01:24.023280 | orchestrator | 00:01:24.023 STDOUT terraform:  } 2025-05-03 00:01:24.023328 | orchestrator | 00:01:24.023 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[9] will be created 2025-05-03 00:01:24.023373 | orchestrator | 00:01:24.023 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-03 00:01:24.023405 | orchestrator | 00:01:24.023 STDOUT terraform:  + attachment = (known after apply) 2025-05-03 00:01:24.023426 | orchestrator | 00:01:24.023 STDOUT terraform:  + availability_zone = "nova" 2025-05-03 00:01:24.023458 | orchestrator | 00:01:24.023 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.023493 | orchestrator | 00:01:24.023 STDOUT terraform:  + metadata = (known after apply) 2025-05-03 00:01:24.023530 | orchestrator | 00:01:24.023 STDOUT terraform:  + name = "testbed-volume-9-node-3" 2025-05-03 00:01:24.023562 | orchestrator | 00:01:24.023 STDOUT 
terraform:  + region = (known after apply) 2025-05-03 00:01:24.023591 | orchestrator | 00:01:24.023 STDOUT terraform:  + size = 20 2025-05-03 00:01:24.023599 | orchestrator | 00:01:24.023 STDOUT terraform:  + volume_type = "ssd" 2025-05-03 00:01:24.023615 | orchestrator | 00:01:24.023 STDOUT terraform:  } 2025-05-03 00:01:24.023663 | orchestrator | 00:01:24.023 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[10] will be created 2025-05-03 00:01:24.023706 | orchestrator | 00:01:24.023 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-03 00:01:24.023740 | orchestrator | 00:01:24.023 STDOUT terraform:  + attachment = (known after apply) 2025-05-03 00:01:24.023761 | orchestrator | 00:01:24.023 STDOUT terraform:  + availability_zone = "nova" 2025-05-03 00:01:24.023824 | orchestrator | 00:01:24.023 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.023858 | orchestrator | 00:01:24.023 STDOUT terraform:  + metadata = (known after apply) 2025-05-03 00:01:24.023893 | orchestrator | 00:01:24.023 STDOUT terraform:  + name = "testbed-volume-10-node-4" 2025-05-03 00:01:24.023924 | orchestrator | 00:01:24.023 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.023946 | orchestrator | 00:01:24.023 STDOUT terraform:  + size = 20 2025-05-03 00:01:24.023967 | orchestrator | 00:01:24.023 STDOUT terraform:  + volume_type = "ssd" 2025-05-03 00:01:24.023975 | orchestrator | 00:01:24.023 STDOUT terraform:  } 2025-05-03 00:01:24.024022 | orchestrator | 00:01:24.023 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[11] will be created 2025-05-03 00:01:24.024065 | orchestrator | 00:01:24.024 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-03 00:01:24.024095 | orchestrator | 00:01:24.024 STDOUT terraform:  + attachment = (known after apply) 2025-05-03 00:01:24.024123 | orchestrator | 00:01:24.024 STDOUT terraform:  + availability_zone = "nova" 
2025-05-03 00:01:24.024146 | orchestrator | 00:01:24.024 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.024178 | orchestrator | 00:01:24.024 STDOUT terraform:  + metadata = (known after apply) 2025-05-03 00:01:24.024217 | orchestrator | 00:01:24.024 STDOUT terraform:  + name = "testbed-volume-11-node-5" 2025-05-03 00:01:24.024248 | orchestrator | 00:01:24.024 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.024269 | orchestrator | 00:01:24.024 STDOUT terraform:  + size = 20 2025-05-03 00:01:24.024290 | orchestrator | 00:01:24.024 STDOUT terraform:  + volume_type = "ssd" 2025-05-03 00:01:24.024297 | orchestrator | 00:01:24.024 STDOUT terraform:  } 2025-05-03 00:01:24.024345 | orchestrator | 00:01:24.024 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[12] will be created 2025-05-03 00:01:24.024388 | orchestrator | 00:01:24.024 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-03 00:01:24.024418 | orchestrator | 00:01:24.024 STDOUT terraform:  + attachment = (known after apply) 2025-05-03 00:01:24.024440 | orchestrator | 00:01:24.024 STDOUT terraform:  + availability_zone = "nova" 2025-05-03 00:01:24.024471 | orchestrator | 00:01:24.024 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.024502 | orchestrator | 00:01:24.024 STDOUT terraform:  + metadata = (known after apply) 2025-05-03 00:01:24.024541 | orchestrator | 00:01:24.024 STDOUT terraform:  + name = "testbed-volume-12-node-0" 2025-05-03 00:01:24.024572 | orchestrator | 00:01:24.024 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.024591 | orchestrator | 00:01:24.024 STDOUT terraform:  + size = 20 2025-05-03 00:01:24.024613 | orchestrator | 00:01:24.024 STDOUT terraform:  + volume_type = "ssd" 2025-05-03 00:01:24.024620 | orchestrator | 00:01:24.024 STDOUT terraform:  } 2025-05-03 00:01:24.024684 | orchestrator | 00:01:24.024 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[13] will be created 2025-05-03 00:01:24.024733 | orchestrator | 00:01:24.024 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-03 00:01:24.024763 | orchestrator | 00:01:24.024 STDOUT terraform:  + attachment = (known after apply) 2025-05-03 00:01:24.024798 | orchestrator | 00:01:24.024 STDOUT terraform:  + availability_zone = "nova" 2025-05-03 00:01:24.024828 | orchestrator | 00:01:24.024 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.024858 | orchestrator | 00:01:24.024 STDOUT terraform:  + metadata = (known after apply) 2025-05-03 00:01:24.024897 | orchestrator | 00:01:24.024 STDOUT terraform:  + name = "testbed-volume-13-node-1" 2025-05-03 00:01:24.024929 | orchestrator | 00:01:24.024 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.024951 | orchestrator | 00:01:24.024 STDOUT terraform:  + size = 20 2025-05-03 00:01:24.024973 | orchestrator | 00:01:24.024 STDOUT terraform:  + volume_type = "ssd" 2025-05-03 00:01:24.024989 | orchestrator | 00:01:24.024 STDOUT terraform:  } 2025-05-03 00:01:24.025034 | orchestrator | 00:01:24.024 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[14] will be created 2025-05-03 00:01:24.025078 | orchestrator | 00:01:24.025 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-03 00:01:24.025109 | orchestrator | 00:01:24.025 STDOUT terraform:  + attachment = (known after apply) 2025-05-03 00:01:24.025130 | orchestrator | 00:01:24.025 STDOUT terraform:  + availability_zone = "nova" 2025-05-03 00:01:24.025164 | orchestrator | 00:01:24.025 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.025196 | orchestrator | 00:01:24.025 STDOUT terraform:  + metadata = (known after apply) 2025-05-03 00:01:24.025233 | orchestrator | 00:01:24.025 STDOUT terraform:  + name = "testbed-volume-14-node-2" 2025-05-03 00:01:24.025264 | orchestrator | 00:01:24.025 STDOUT 
terraform:  + region = (known after apply) 2025-05-03 00:01:24.025286 | orchestrator | 00:01:24.025 STDOUT terraform:  + size = 20 2025-05-03 00:01:24.025307 | orchestrator | 00:01:24.025 STDOUT terraform:  + volume_type = "ssd" 2025-05-03 00:01:24.025315 | orchestrator | 00:01:24.025 STDOUT terraform:  } 2025-05-03 00:01:24.025362 | orchestrator | 00:01:24.025 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[15] will be created 2025-05-03 00:01:24.025404 | orchestrator | 00:01:24.025 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-03 00:01:24.025437 | orchestrator | 00:01:24.025 STDOUT terraform:  + attachment = (known after apply) 2025-05-03 00:01:24.025456 | orchestrator | 00:01:24.025 STDOUT terraform:  + availability_zone = "nova" 2025-05-03 00:01:24.025487 | orchestrator | 00:01:24.025 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.025518 | orchestrator | 00:01:24.025 STDOUT terraform:  + metadata = (known after apply) 2025-05-03 00:01:24.025554 | orchestrator | 00:01:24.025 STDOUT terraform:  + name = "testbed-volume-15-node-3" 2025-05-03 00:01:24.025590 | orchestrator | 00:01:24.025 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.025611 | orchestrator | 00:01:24.025 STDOUT terraform:  + size = 20 2025-05-03 00:01:24.025633 | orchestrator | 00:01:24.025 STDOUT terraform:  + volume_type = "ssd" 2025-05-03 00:01:24.025640 | orchestrator | 00:01:24.025 STDOUT terraform:  } 2025-05-03 00:01:24.025687 | orchestrator | 00:01:24.025 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[16] will be created 2025-05-03 00:01:24.025730 | orchestrator | 00:01:24.025 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-03 00:01:24.025760 | orchestrator | 00:01:24.025 STDOUT terraform:  + attachment = (known after apply) 2025-05-03 00:01:24.025876 | orchestrator | 00:01:24.025 STDOUT terraform:  + availability_zone = "nova" 
2025-05-03 00:01:24.025886 | orchestrator | 00:01:24.025 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.025891 | orchestrator | 00:01:24.025 STDOUT terraform:  + metadata = (known after apply) 2025-05-03 00:01:24.025898 | orchestrator | 00:01:24.025 STDOUT terraform:  + name = "testbed-volume-16-node-4" 2025-05-03 00:01:24.025905 | orchestrator | 00:01:24.025 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.025930 | orchestrator | 00:01:24.025 STDOUT terraform:  + size = 20 2025-05-03 00:01:24.025951 | orchestrator | 00:01:24.025 STDOUT terraform:  + volume_type = "ssd" 2025-05-03 00:01:24.025965 | orchestrator | 00:01:24.025 STDOUT terraform:  } 2025-05-03 00:01:24.026011 | orchestrator | 00:01:24.025 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[17] will be created 2025-05-03 00:01:24.026068 | orchestrator | 00:01:24.026 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-03 00:01:24.026099 | orchestrator | 00:01:24.026 STDOUT terraform:  + attachment = (known after apply) 2025-05-03 00:01:24.026119 | orchestrator | 00:01:24.026 STDOUT terraform:  + availability_zone = "nova" 2025-05-03 00:01:24.026150 | orchestrator | 00:01:24.026 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.026181 | orchestrator | 00:01:24.026 STDOUT terraform:  + metadata = (known after apply) 2025-05-03 00:01:24.026218 | orchestrator | 00:01:24.026 STDOUT terraform:  + name = "testbed-volume-17-node-5" 2025-05-03 00:01:24.026249 | orchestrator | 00:01:24.026 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.026269 | orchestrator | 00:01:24.026 STDOUT terraform:  + size = 20 2025-05-03 00:01:24.026291 | orchestrator | 00:01:24.026 STDOUT terraform:  + volume_type = "ssd" 2025-05-03 00:01:24.026298 | orchestrator | 00:01:24.026 STDOUT terraform:  } 2025-05-03 00:01:24.026342 | orchestrator | 00:01:24.026 STDOUT terraform:  # 
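The eighteen count-indexed `node_volume` plan entries above follow a regular naming pattern (`testbed-volume-<count>-node-<count mod 6>`), which suggests a single counted resource definition. A minimal HCL sketch of what such a definition might look like — the variable-free form and the `count` expression are assumptions, not taken from the testbed repository's actual Terraform files; only the attribute values come from the plan output:

```hcl
# Hypothetical sketch of the resource behind the node_volume plan entries.
# 18 volumes of 20 GB are spread round-robin across 6 nodes.
resource "openstack_blockstorage_volume_v3" "node_volume" {
  count             = 18                                                  # node_volume[0]..[17]
  availability_zone = "nova"
  name              = "testbed-volume-${count.index}-node-${count.index % 6}"
  size              = 20
  volume_type       = "ssd"
}
```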
2025-05-03 00:01:24.026385 | orchestrator | 00:01:24.026 STDOUT terraform:
  # openstack_compute_instance_v2.manager_server will be created
  + resource "openstack_compute_instance_v2" "manager_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-4V-16"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-manager"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = (known after apply)

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[0] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-1"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
2025-05-03 00:01:24.030435 | orchestrator | 00:01:24.030 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-03 00:01:24.030451 | orchestrator | 00:01:24.030 STDOUT terraform:  + force_delete = false 2025-05-03 00:01:24.030487 | orchestrator | 00:01:24.030 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.030523 | orchestrator | 00:01:24.030 STDOUT terraform:  + image_id = (known after apply) 2025-05-03 00:01:24.030559 | orchestrator | 00:01:24.030 STDOUT terraform:  + image_name = (known after apply) 2025-05-03 00:01:24.030593 | orchestrator | 00:01:24.030 STDOUT terraform:  + key_pair = "testbed" 2025-05-03 00:01:24.030617 | orchestrator | 00:01:24.030 STDOUT terraform:  + name = "testbed-node-2" 2025-05-03 00:01:24.030641 | orchestrator | 00:01:24.030 STDOUT terraform:  + power_state = "active" 2025-05-03 00:01:24.030677 | orchestrator | 00:01:24.030 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.030711 | orchestrator | 00:01:24.030 STDOUT terraform:  + security_groups = (known after apply) 2025-05-03 00:01:24.030743 | orchestrator | 00:01:24.030 STDOUT terraform:  + stop_before_destroy = false 2025-05-03 00:01:24.030771 | orchestrator | 00:01:24.030 STDOUT terraform:  + updated = (known after apply) 2025-05-03 00:01:24.030849 | orchestrator | 00:01:24.030 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-03 00:01:24.030865 | orchestrator | 00:01:24.030 STDOUT terraform:  + block_device { 2025-05-03 00:01:24.030889 | orchestrator | 00:01:24.030 STDOUT terraform:  + boot_index = 0 2025-05-03 00:01:24.030917 | orchestrator | 00:01:24.030 STDOUT terraform:  + delete_on_termination = false 2025-05-03 00:01:24.030947 | orchestrator | 00:01:24.030 STDOUT terraform:  + destination_type = "volume" 2025-05-03 00:01:24.030984 | orchestrator | 00:01:24.030 STDOUT terraform:  + multiattach = false 2025-05-03 00:01:24.031008 | orchestrator | 00:01:24.030 STDOUT terraform:  + source_type = "volume" 
2025-05-03 00:01:24.031045 | orchestrator | 00:01:24.031 STDOUT terraform:  + uuid = (known after apply) 2025-05-03 00:01:24.031068 | orchestrator | 00:01:24.031 STDOUT terraform:  } 2025-05-03 00:01:24.031075 | orchestrator | 00:01:24.031 STDOUT terraform:  + network { 2025-05-03 00:01:24.031094 | orchestrator | 00:01:24.031 STDOUT terraform:  + access_network = false 2025-05-03 00:01:24.031126 | orchestrator | 00:01:24.031 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-03 00:01:24.031169 | orchestrator | 00:01:24.031 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-03 00:01:24.031187 | orchestrator | 00:01:24.031 STDOUT terraform:  + mac = (known after apply) 2025-05-03 00:01:24.031220 | orchestrator | 00:01:24.031 STDOUT terraform:  + name = (known after apply) 2025-05-03 00:01:24.031252 | orchestrator | 00:01:24.031 STDOUT terraform:  + port = (known after apply) 2025-05-03 00:01:24.031283 | orchestrator | 00:01:24.031 STDOUT terraform:  + uuid = (known after apply) 2025-05-03 00:01:24.031298 | orchestrator | 00:01:24.031 STDOUT terraform:  } 2025-05-03 00:01:24.031305 | orchestrator | 00:01:24.031 STDOUT terraform:  } 2025-05-03 00:01:24.031413 | orchestrator | 00:01:24.031 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-05-03 00:01:24.031452 | orchestrator | 00:01:24.031 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-03 00:01:24.031489 | orchestrator | 00:01:24.031 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-03 00:01:24.031524 | orchestrator | 00:01:24.031 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-03 00:01:24.031562 | orchestrator | 00:01:24.031 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-03 00:01:24.031600 | orchestrator | 00:01:24.031 STDOUT terraform:  + all_tags = (known after apply) 2025-05-03 00:01:24.031637 | orchestrator | 00:01:24.031 STDOUT terraform:  + availability_zone = "nova" 
2025-05-03 00:01:24.031644 | orchestrator | 00:01:24.031 STDOUT terraform:  + config_drive = true 2025-05-03 00:01:24.031667 | orchestrator | 00:01:24.031 STDOUT terraform:  + created = (known after apply) 2025-05-03 00:01:24.031717 | orchestrator | 00:01:24.031 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-03 00:01:24.031728 | orchestrator | 00:01:24.031 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-03 00:01:24.031751 | orchestrator | 00:01:24.031 STDOUT terraform:  + force_delete = false 2025-05-03 00:01:24.031812 | orchestrator | 00:01:24.031 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.031835 | orchestrator | 00:01:24.031 STDOUT terraform:  + image_id = (known after apply) 2025-05-03 00:01:24.031868 | orchestrator | 00:01:24.031 STDOUT terraform:  + image_name = (known after apply) 2025-05-03 00:01:24.031890 | orchestrator | 00:01:24.031 STDOUT terraform:  + key_pair = "testbed" 2025-05-03 00:01:24.031922 | orchestrator | 00:01:24.031 STDOUT terraform:  + name = "testbed-node-3" 2025-05-03 00:01:24.031947 | orchestrator | 00:01:24.031 STDOUT terraform:  + power_state = "active" 2025-05-03 00:01:24.031982 | orchestrator | 00:01:24.031 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.032017 | orchestrator | 00:01:24.031 STDOUT terraform:  + security_groups = (known after apply) 2025-05-03 00:01:24.032048 | orchestrator | 00:01:24.032 STDOUT terraform:  + stop_before_destroy = false 2025-05-03 00:01:24.032076 | orchestrator | 00:01:24.032 STDOUT terraform:  + updated = (known after apply) 2025-05-03 00:01:24.032148 | orchestrator | 00:01:24.032 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-03 00:01:24.032164 | orchestrator | 00:01:24.032 STDOUT terraform:  + block_device { 2025-05-03 00:01:24.032205 | orchestrator | 00:01:24.032 STDOUT terraform:  + boot_index = 0 2025-05-03 00:01:24.032213 | orchestrator | 00:01:24.032 STDOUT terraform:  + 
delete_on_termination = false 2025-05-03 00:01:24.032247 | orchestrator | 00:01:24.032 STDOUT terraform:  + destination_type = "volume" 2025-05-03 00:01:24.032275 | orchestrator | 00:01:24.032 STDOUT terraform:  + multiattach = false 2025-05-03 00:01:24.032306 | orchestrator | 00:01:24.032 STDOUT terraform:  + source_type = "volume" 2025-05-03 00:01:24.032345 | orchestrator | 00:01:24.032 STDOUT terraform:  + uuid = (known after apply) 2025-05-03 00:01:24.032353 | orchestrator | 00:01:24.032 STDOUT terraform:  } 2025-05-03 00:01:24.032370 | orchestrator | 00:01:24.032 STDOUT terraform:  + network { 2025-05-03 00:01:24.032391 | orchestrator | 00:01:24.032 STDOUT terraform:  + access_network = false 2025-05-03 00:01:24.032438 | orchestrator | 00:01:24.032 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-03 00:01:24.032447 | orchestrator | 00:01:24.032 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-03 00:01:24.032483 | orchestrator | 00:01:24.032 STDOUT terraform:  + mac = (known after apply) 2025-05-03 00:01:24.032530 | orchestrator | 00:01:24.032 STDOUT terraform:  + name = (known after apply) 2025-05-03 00:01:24.032548 | orchestrator | 00:01:24.032 STDOUT terraform:  + port = (known after apply) 2025-05-03 00:01:24.032580 | orchestrator | 00:01:24.032 STDOUT terraform:  + uuid = (known after apply) 2025-05-03 00:01:24.032588 | orchestrator | 00:01:24.032 STDOUT terraform:  } 2025-05-03 00:01:24.032603 | orchestrator | 00:01:24.032 STDOUT terraform:  } 2025-05-03 00:01:24.032664 | orchestrator | 00:01:24.032 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-05-03 00:01:24.032691 | orchestrator | 00:01:24.032 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-03 00:01:24.032727 | orchestrator | 00:01:24.032 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-03 00:01:24.032760 | orchestrator | 00:01:24.032 STDOUT terraform:  + access_ip_v6 = (known after 
apply) 2025-05-03 00:01:24.032823 | orchestrator | 00:01:24.032 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-03 00:01:24.032859 | orchestrator | 00:01:24.032 STDOUT terraform:  + all_tags = (known after apply) 2025-05-03 00:01:24.032883 | orchestrator | 00:01:24.032 STDOUT terraform:  + availability_zone = "nova" 2025-05-03 00:01:24.032904 | orchestrator | 00:01:24.032 STDOUT terraform:  + config_drive = true 2025-05-03 00:01:24.032940 | orchestrator | 00:01:24.032 STDOUT terraform:  + created = (known after apply) 2025-05-03 00:01:24.032977 | orchestrator | 00:01:24.032 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-03 00:01:24.033006 | orchestrator | 00:01:24.032 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-03 00:01:24.033041 | orchestrator | 00:01:24.033 STDOUT terraform:  + force_delete = false 2025-05-03 00:01:24.033065 | orchestrator | 00:01:24.033 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.033100 | orchestrator | 00:01:24.033 STDOUT terraform:  + image_id = (known after apply) 2025-05-03 00:01:24.033134 | orchestrator | 00:01:24.033 STDOUT terraform:  + image_name = (known after apply) 2025-05-03 00:01:24.033160 | orchestrator | 00:01:24.033 STDOUT terraform:  + key_pair = "testbed" 2025-05-03 00:01:24.033192 | orchestrator | 00:01:24.033 STDOUT terraform:  + name = "testbed-node-4" 2025-05-03 00:01:24.033217 | orchestrator | 00:01:24.033 STDOUT terraform:  + power_state = "active" 2025-05-03 00:01:24.033253 | orchestrator | 00:01:24.033 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.033288 | orchestrator | 00:01:24.033 STDOUT terraform:  + security_groups = (known after apply) 2025-05-03 00:01:24.033312 | orchestrator | 00:01:24.033 STDOUT terraform:  + stop_before_destroy = false 2025-05-03 00:01:24.033348 | orchestrator | 00:01:24.033 STDOUT terraform:  + updated = (known after apply) 2025-05-03 00:01:24.033398 | orchestrator | 00:01:24.033 STDOUT terraform:  + 
user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-03 00:01:24.033415 | orchestrator | 00:01:24.033 STDOUT terraform:  + block_device { 2025-05-03 00:01:24.033439 | orchestrator | 00:01:24.033 STDOUT terraform:  + boot_index = 0 2025-05-03 00:01:24.033467 | orchestrator | 00:01:24.033 STDOUT terraform:  + delete_on_termination = false 2025-05-03 00:01:24.033496 | orchestrator | 00:01:24.033 STDOUT terraform:  + destination_type = "volume" 2025-05-03 00:01:24.033525 | orchestrator | 00:01:24.033 STDOUT terraform:  + multiattach = false 2025-05-03 00:01:24.033555 | orchestrator | 00:01:24.033 STDOUT terraform:  + source_type = "volume" 2025-05-03 00:01:24.033594 | orchestrator | 00:01:24.033 STDOUT terraform:  + uuid = (known after apply) 2025-05-03 00:01:24.033601 | orchestrator | 00:01:24.033 STDOUT terraform:  } 2025-05-03 00:01:24.033619 | orchestrator | 00:01:24.033 STDOUT terraform:  + network { 2025-05-03 00:01:24.033642 | orchestrator | 00:01:24.033 STDOUT terraform:  + access_network = false 2025-05-03 00:01:24.033674 | orchestrator | 00:01:24.033 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-03 00:01:24.033701 | orchestrator | 00:01:24.033 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-03 00:01:24.033741 | orchestrator | 00:01:24.033 STDOUT terraform:  + mac = (known after apply) 2025-05-03 00:01:24.033764 | orchestrator | 00:01:24.033 STDOUT terraform:  + name = (known after apply) 2025-05-03 00:01:24.033819 | orchestrator | 00:01:24.033 STDOUT terraform:  + port = (known after apply) 2025-05-03 00:01:24.033841 | orchestrator | 00:01:24.033 STDOUT terraform:  + uuid = (known after apply) 2025-05-03 00:01:24.033848 | orchestrator | 00:01:24.033 STDOUT terraform:  } 2025-05-03 00:01:24.033864 | orchestrator | 00:01:24.033 STDOUT terraform:  } 2025-05-03 00:01:24.033908 | orchestrator | 00:01:24.033 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-05-03 00:01:24.033951 | 
orchestrator | 00:01:24.033 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-03 00:01:24.033984 | orchestrator | 00:01:24.033 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-03 00:01:24.034035 | orchestrator | 00:01:24.033 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-03 00:01:24.034074 | orchestrator | 00:01:24.034 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-03 00:01:24.034110 | orchestrator | 00:01:24.034 STDOUT terraform:  + all_tags = (known after apply) 2025-05-03 00:01:24.034136 | orchestrator | 00:01:24.034 STDOUT terraform:  + availability_zone = "nova" 2025-05-03 00:01:24.034157 | orchestrator | 00:01:24.034 STDOUT terraform:  + config_drive = true 2025-05-03 00:01:24.034192 | orchestrator | 00:01:24.034 STDOUT terraform:  + created = (known after apply) 2025-05-03 00:01:24.034228 | orchestrator | 00:01:24.034 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-03 00:01:24.034259 | orchestrator | 00:01:24.034 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-03 00:01:24.034281 | orchestrator | 00:01:24.034 STDOUT terraform:  + force_delete = false 2025-05-03 00:01:24.034316 | orchestrator | 00:01:24.034 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.034352 | orchestrator | 00:01:24.034 STDOUT terraform:  + image_id = (known after apply) 2025-05-03 00:01:24.034437 | orchestrator | 00:01:24.034 STDOUT terraform:  + image_name = (known after apply) 2025-05-03 00:01:24.034463 | orchestrator | 00:01:24.034 STDOUT terraform:  + key_pair = "testbed" 2025-05-03 00:01:24.034495 | orchestrator | 00:01:24.034 STDOUT terraform:  + name = "testbed-node-5" 2025-05-03 00:01:24.034520 | orchestrator | 00:01:24.034 STDOUT terraform:  + power_state = "active" 2025-05-03 00:01:24.034556 | orchestrator | 00:01:24.034 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.034592 | orchestrator | 00:01:24.034 STDOUT terraform:  + 
security_groups = (known after apply) 2025-05-03 00:01:24.034615 | orchestrator | 00:01:24.034 STDOUT terraform:  + stop_before_destroy = false 2025-05-03 00:01:24.034651 | orchestrator | 00:01:24.034 STDOUT terraform:  + updated = (known after apply) 2025-05-03 00:01:24.034702 | orchestrator | 00:01:24.034 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-03 00:01:24.034719 | orchestrator | 00:01:24.034 STDOUT terraform:  + block_device { 2025-05-03 00:01:24.034747 | orchestrator | 00:01:24.034 STDOUT terraform:  + boot_index = 0 2025-05-03 00:01:24.034776 | orchestrator | 00:01:24.034 STDOUT terraform:  + delete_on_termination = false 2025-05-03 00:01:24.034819 | orchestrator | 00:01:24.034 STDOUT terraform:  + destination_type = "volume" 2025-05-03 00:01:24.034848 | orchestrator | 00:01:24.034 STDOUT terraform:  + multiattach = false 2025-05-03 00:01:24.034878 | orchestrator | 00:01:24.034 STDOUT terraform:  + source_type = "volume" 2025-05-03 00:01:24.034917 | orchestrator | 00:01:24.034 STDOUT terraform:  + uuid = (known after apply) 2025-05-03 00:01:24.034924 | orchestrator | 00:01:24.034 STDOUT terraform:  } 2025-05-03 00:01:24.034943 | orchestrator | 00:01:24.034 STDOUT terraform:  + network { 2025-05-03 00:01:24.034966 | orchestrator | 00:01:24.034 STDOUT terraform:  + access_network = false 2025-05-03 00:01:24.034996 | orchestrator | 00:01:24.034 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-03 00:01:24.035027 | orchestrator | 00:01:24.034 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-03 00:01:24.035059 | orchestrator | 00:01:24.035 STDOUT terraform:  + mac = (known after apply) 2025-05-03 00:01:24.035091 | orchestrator | 00:01:24.035 STDOUT terraform:  + name = (known after apply) 2025-05-03 00:01:24.035122 | orchestrator | 00:01:24.035 STDOUT terraform:  + port = (known after apply) 2025-05-03 00:01:24.035153 | orchestrator | 00:01:24.035 STDOUT terraform:  + uuid = (known after apply) 
2025-05-03 00:01:24.035160 | orchestrator | 00:01:24.035 STDOUT terraform:  } 2025-05-03 00:01:24.035177 | orchestrator | 00:01:24.035 STDOUT terraform:  } 2025-05-03 00:01:24.035215 | orchestrator | 00:01:24.035 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-05-03 00:01:24.035245 | orchestrator | 00:01:24.035 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-05-03 00:01:24.035273 | orchestrator | 00:01:24.035 STDOUT terraform:  + fingerprint = (known after apply) 2025-05-03 00:01:24.035302 | orchestrator | 00:01:24.035 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.035317 | orchestrator | 00:01:24.035 STDOUT terraform:  + name = "testbed" 2025-05-03 00:01:24.035344 | orchestrator | 00:01:24.035 STDOUT terraform:  + private_key = (sensitive value) 2025-05-03 00:01:24.035372 | orchestrator | 00:01:24.035 STDOUT terraform:  + public_key = (known after apply) 2025-05-03 00:01:24.035401 | orchestrator | 00:01:24.035 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.035430 | orchestrator | 00:01:24.035 STDOUT terraform:  + user_id = (known after apply) 2025-05-03 00:01:24.035437 | orchestrator | 00:01:24.035 STDOUT terraform:  } 2025-05-03 00:01:24.035490 | orchestrator | 00:01:24.035 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-05-03 00:01:24.035537 | orchestrator | 00:01:24.035 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-03 00:01:24.035566 | orchestrator | 00:01:24.035 STDOUT terraform:  + device = (known after apply) 2025-05-03 00:01:24.035594 | orchestrator | 00:01:24.035 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.035623 | orchestrator | 00:01:24.035 STDOUT terraform:  + instance_id = (known after apply) 2025-05-03 00:01:24.035651 | orchestrator | 00:01:24.035 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.035679 | 
orchestrator | 00:01:24.035 STDOUT terraform:  + volume_id = (known after apply) 2025-05-03 00:01:24.035687 | orchestrator | 00:01:24.035 STDOUT terraform:  } 2025-05-03 00:01:24.035739 | orchestrator | 00:01:24.035 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-05-03 00:01:24.035797 | orchestrator | 00:01:24.035 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-03 00:01:24.035841 | orchestrator | 00:01:24.035 STDOUT terraform:  + device = (known after apply) 2025-05-03 00:01:24.035871 | orchestrator | 00:01:24.035 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.035899 | orchestrator | 00:01:24.035 STDOUT terraform:  + instance_id = (known after apply) 2025-05-03 00:01:24.035928 | orchestrator | 00:01:24.035 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.035960 | orchestrator | 00:01:24.035 STDOUT terraform:  + volume_id = (known after apply) 2025-05-03 00:01:24.035968 | orchestrator | 00:01:24.035 STDOUT terraform:  } 2025-05-03 00:01:24.036016 | orchestrator | 00:01:24.035 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-05-03 00:01:24.036064 | orchestrator | 00:01:24.036 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-03 00:01:24.036094 | orchestrator | 00:01:24.036 STDOUT terraform:  + device = (known after apply) 2025-05-03 00:01:24.036124 | orchestrator | 00:01:24.036 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.036151 | orchestrator | 00:01:24.036 STDOUT terraform:  + instance_id = (known after apply) 2025-05-03 00:01:24.036181 | orchestrator | 00:01:24.036 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.036209 | orchestrator | 00:01:24.036 STDOUT terraform:  + volume_id = (known after apply) 2025-05-03 00:01:24.036221 | orchestrator | 00:01:24.036 STDOUT 
terraform:  } 2025-05-03 00:01:24.036268 | orchestrator | 00:01:24.036 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-05-03 00:01:24.036316 | orchestrator | 00:01:24.036 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-03 00:01:24.036346 | orchestrator | 00:01:24.036 STDOUT terraform:  + device = (known after apply) 2025-05-03 00:01:24.036374 | orchestrator | 00:01:24.036 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.036403 | orchestrator | 00:01:24.036 STDOUT terraform:  + instance_id = (known after apply) 2025-05-03 00:01:24.036431 | orchestrator | 00:01:24.036 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.036471 | orchestrator | 00:01:24.036 STDOUT terraform:  + volume_id = (known after apply) 2025-05-03 00:01:24.036493 | orchestrator | 00:01:24.036 STDOUT terraform:  } 2025-05-03 00:01:24.036544 | orchestrator | 00:01:24.036 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-05-03 00:01:24.036593 | orchestrator | 00:01:24.036 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-03 00:01:24.036621 | orchestrator | 00:01:24.036 STDOUT terraform:  + device = (known after apply) 2025-05-03 00:01:24.036652 | orchestrator | 00:01:24.036 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.036681 | orchestrator | 00:01:24.036 STDOUT terraform:  + instance_id = (known after apply) 2025-05-03 00:01:24.036710 | orchestrator | 00:01:24.036 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.036739 | orchestrator | 00:01:24.036 STDOUT terraform:  + volume_id = (known after apply) 2025-05-03 00:01:24.036755 | orchestrator | 00:01:24.036 STDOUT terraform:  } 2025-05-03 00:01:24.036821 | orchestrator | 00:01:24.036 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] 
will be created 2025-05-03 00:01:24.036870 | orchestrator | 00:01:24.036 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-03 00:01:24.036897 | orchestrator | 00:01:24.036 STDOUT terraform:  + device = (known after apply) 2025-05-03 00:01:24.036926 | orchestrator | 00:01:24.036 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.036955 | orchestrator | 00:01:24.036 STDOUT terraform:  + instance_id = (known after apply) 2025-05-03 00:01:24.036984 | orchestrator | 00:01:24.036 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.037013 | orchestrator | 00:01:24.036 STDOUT terraform:  + volume_id = (known after apply) 2025-05-03 00:01:24.037020 | orchestrator | 00:01:24.037 STDOUT terraform:  } 2025-05-03 00:01:24.037072 | orchestrator | 00:01:24.037 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-05-03 00:01:24.037120 | orchestrator | 00:01:24.037 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-03 00:01:24.037148 | orchestrator | 00:01:24.037 STDOUT terraform:  + device = (known after apply) 2025-05-03 00:01:24.037177 | orchestrator | 00:01:24.037 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.037204 | orchestrator | 00:01:24.037 STDOUT terraform:  + instance_id = (known after apply) 2025-05-03 00:01:24.037234 | orchestrator | 00:01:24.037 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.037262 | orchestrator | 00:01:24.037 STDOUT terraform:  + volume_id = (known after apply) 2025-05-03 00:01:24.037269 | orchestrator | 00:01:24.037 STDOUT terraform:  } 2025-05-03 00:01:24.037321 | orchestrator | 00:01:24.037 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-05-03 00:01:24.037368 | orchestrator | 00:01:24.037 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" 
"node_volume_attachment" { 2025-05-03 00:01:24.037398 | orchestrator | 00:01:24.037 STDOUT terraform:  + device = (known after apply) 2025-05-03 00:01:24.037426 | orchestrator | 00:01:24.037 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.037454 | orchestrator | 00:01:24.037 STDOUT terraform:  + instance_id = (known after apply) 2025-05-03 00:01:24.037486 | orchestrator | 00:01:24.037 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.037511 | orchestrator | 00:01:24.037 STDOUT terraform:  + volume_id = (known after apply) 2025-05-03 00:01:24.037518 | orchestrator | 00:01:24.037 STDOUT terraform:  } 2025-05-03 00:01:24.037569 | orchestrator | 00:01:24.037 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-05-03 00:01:24.037618 | orchestrator | 00:01:24.037 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-03 00:01:24.037646 | orchestrator | 00:01:24.037 STDOUT terraform:  + device = (known after apply) 2025-05-03 00:01:24.037675 | orchestrator | 00:01:24.037 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.037711 | orchestrator | 00:01:24.037 STDOUT terraform:  + instance_id = (known after apply) 2025-05-03 00:01:24.037731 | orchestrator | 00:01:24.037 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.037759 | orchestrator | 00:01:24.037 STDOUT terraform:  + volume_id = (known after apply) 2025-05-03 00:01:24.037766 | orchestrator | 00:01:24.037 STDOUT terraform:  } 2025-05-03 00:01:24.037850 | orchestrator | 00:01:24.037 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[9] will be created 2025-05-03 00:01:24.037899 | orchestrator | 00:01:24.037 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-03 00:01:24.037928 | orchestrator | 00:01:24.037 STDOUT terraform:  + device = (known after apply) 2025-05-03 
00:01:24.037957 | orchestrator | 00:01:24.037 STDOUT terraform:
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[10] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[11] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[12] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[13] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[14] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[15] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[16] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[17] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)
      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be
created 2025-05-03 00:01:24.049198 | orchestrator | 00:01:24.049 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-05-03 00:01:24.049234 | orchestrator | 00:01:24.049 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-03 00:01:24.049271 | orchestrator | 00:01:24.049 STDOUT terraform:  + all_tags = (known after apply) 2025-05-03 00:01:24.049296 | orchestrator | 00:01:24.049 STDOUT terraform:  + availability_zone_hints = [ 2025-05-03 00:01:24.049311 | orchestrator | 00:01:24.049 STDOUT terraform:  + "nova", 2025-05-03 00:01:24.049318 | orchestrator | 00:01:24.049 STDOUT terraform:  ] 2025-05-03 00:01:24.049356 | orchestrator | 00:01:24.049 STDOUT terraform:  + distributed = (known after apply) 2025-05-03 00:01:24.049393 | orchestrator | 00:01:24.049 STDOUT terraform:  + enable_snat = (known after apply) 2025-05-03 00:01:24.049442 | orchestrator | 00:01:24.049 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-05-03 00:01:24.049480 | orchestrator | 00:01:24.049 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.049510 | orchestrator | 00:01:24.049 STDOUT terraform:  + name = "testbed" 2025-05-03 00:01:24.049547 | orchestrator | 00:01:24.049 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.049584 | orchestrator | 00:01:24.049 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-03 00:01:24.049612 | orchestrator | 00:01:24.049 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-05-03 00:01:24.049620 | orchestrator | 00:01:24.049 STDOUT terraform:  } 2025-05-03 00:01:24.049677 | orchestrator | 00:01:24.049 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-05-03 00:01:24.049729 | orchestrator | 00:01:24.049 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-05-03 00:01:24.049751 | orchestrator | 00:01:24.049 STDOUT 
terraform:  + description = "ssh" 2025-05-03 00:01:24.049774 | orchestrator | 00:01:24.049 STDOUT terraform:  + direction = "ingress" 2025-05-03 00:01:24.049825 | orchestrator | 00:01:24.049 STDOUT terraform:  + ethertype = "IPv4" 2025-05-03 00:01:24.049833 | orchestrator | 00:01:24.049 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.049857 | orchestrator | 00:01:24.049 STDOUT terraform:  + port_range_max = 22 2025-05-03 00:01:24.049877 | orchestrator | 00:01:24.049 STDOUT terraform:  + port_range_min = 22 2025-05-03 00:01:24.049899 | orchestrator | 00:01:24.049 STDOUT terraform:  + protocol = "tcp" 2025-05-03 00:01:24.049930 | orchestrator | 00:01:24.049 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.049960 | orchestrator | 00:01:24.049 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-03 00:01:24.049985 | orchestrator | 00:01:24.049 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-03 00:01:24.050058 | orchestrator | 00:01:24.049 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-03 00:01:24.050067 | orchestrator | 00:01:24.050 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-03 00:01:24.050073 | orchestrator | 00:01:24.050 STDOUT terraform:  } 2025-05-03 00:01:24.050119 | orchestrator | 00:01:24.050 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-05-03 00:01:24.050171 | orchestrator | 00:01:24.050 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-05-03 00:01:24.050195 | orchestrator | 00:01:24.050 STDOUT terraform:  + description = "wireguard" 2025-05-03 00:01:24.050219 | orchestrator | 00:01:24.050 STDOUT terraform:  + direction = "ingress" 2025-05-03 00:01:24.050240 | orchestrator | 00:01:24.050 STDOUT terraform:  + ethertype = "IPv4" 2025-05-03 00:01:24.050273 | orchestrator | 00:01:24.050 STDOUT terraform:  + id = (known after apply) 
2025-05-03 00:01:24.050294 | orchestrator | 00:01:24.050 STDOUT terraform:  + port_range_max = 51820 2025-05-03 00:01:24.050315 | orchestrator | 00:01:24.050 STDOUT terraform:  + port_range_min = 51820 2025-05-03 00:01:24.050336 | orchestrator | 00:01:24.050 STDOUT terraform:  + protocol = "udp" 2025-05-03 00:01:24.050367 | orchestrator | 00:01:24.050 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.050397 | orchestrator | 00:01:24.050 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-03 00:01:24.050422 | orchestrator | 00:01:24.050 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-03 00:01:24.050451 | orchestrator | 00:01:24.050 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-03 00:01:24.050483 | orchestrator | 00:01:24.050 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-03 00:01:24.050490 | orchestrator | 00:01:24.050 STDOUT terraform:  } 2025-05-03 00:01:24.050548 | orchestrator | 00:01:24.050 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-05-03 00:01:24.050601 | orchestrator | 00:01:24.050 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-05-03 00:01:24.050624 | orchestrator | 00:01:24.050 STDOUT terraform:  + direction = "ingress" 2025-05-03 00:01:24.050646 | orchestrator | 00:01:24.050 STDOUT terraform:  + ethertype = "IPv4" 2025-05-03 00:01:24.050677 | orchestrator | 00:01:24.050 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.050698 | orchestrator | 00:01:24.050 STDOUT terraform:  + protocol = "tcp" 2025-05-03 00:01:24.050730 | orchestrator | 00:01:24.050 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.050760 | orchestrator | 00:01:24.050 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-03 00:01:24.050802 | orchestrator | 00:01:24.050 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 
2025-05-03 00:01:24.050846 | orchestrator | 00:01:24.050 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-03 00:01:24.050879 | orchestrator | 00:01:24.050 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-03 00:01:24.050886 | orchestrator | 00:01:24.050 STDOUT terraform:  } 2025-05-03 00:01:24.050942 | orchestrator | 00:01:24.050 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-05-03 00:01:24.050995 | orchestrator | 00:01:24.050 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-05-03 00:01:24.051020 | orchestrator | 00:01:24.050 STDOUT terraform:  + direction = "ingress" 2025-05-03 00:01:24.051041 | orchestrator | 00:01:24.051 STDOUT terraform:  + ethertype = "IPv4" 2025-05-03 00:01:24.051072 | orchestrator | 00:01:24.051 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.051093 | orchestrator | 00:01:24.051 STDOUT terraform:  + protocol = "udp" 2025-05-03 00:01:24.051124 | orchestrator | 00:01:24.051 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.051154 | orchestrator | 00:01:24.051 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-03 00:01:24.051187 | orchestrator | 00:01:24.051 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-03 00:01:24.051213 | orchestrator | 00:01:24.051 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-03 00:01:24.051244 | orchestrator | 00:01:24.051 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-03 00:01:24.051252 | orchestrator | 00:01:24.051 STDOUT terraform:  } 2025-05-03 00:01:24.051306 | orchestrator | 00:01:24.051 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-05-03 00:01:24.051360 | orchestrator | 00:01:24.051 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 
2025-05-03 00:01:24.051386 | orchestrator | 00:01:24.051 STDOUT terraform:  + direction = "ingress" 2025-05-03 00:01:24.051407 | orchestrator | 00:01:24.051 STDOUT terraform:  + ethertype = "IPv4" 2025-05-03 00:01:24.051438 | orchestrator | 00:01:24.051 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.051459 | orchestrator | 00:01:24.051 STDOUT terraform:  + protocol = "icmp" 2025-05-03 00:01:24.051490 | orchestrator | 00:01:24.051 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.051519 | orchestrator | 00:01:24.051 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-03 00:01:24.051544 | orchestrator | 00:01:24.051 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-03 00:01:24.051574 | orchestrator | 00:01:24.051 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-03 00:01:24.051604 | orchestrator | 00:01:24.051 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-03 00:01:24.051612 | orchestrator | 00:01:24.051 STDOUT terraform:  } 2025-05-03 00:01:24.051666 | orchestrator | 00:01:24.051 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-05-03 00:01:24.051716 | orchestrator | 00:01:24.051 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-05-03 00:01:24.051741 | orchestrator | 00:01:24.051 STDOUT terraform:  + direction = "ingress" 2025-05-03 00:01:24.051761 | orchestrator | 00:01:24.051 STDOUT terraform:  + ethertype = "IPv4" 2025-05-03 00:01:24.051812 | orchestrator | 00:01:24.051 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.051838 | orchestrator | 00:01:24.051 STDOUT terraform:  + protocol = "tcp" 2025-05-03 00:01:24.051868 | orchestrator | 00:01:24.051 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.051898 | orchestrator | 00:01:24.051 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-03 00:01:24.051923 | 
orchestrator | 00:01:24.051 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-03 00:01:24.051953 | orchestrator | 00:01:24.051 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-03 00:01:24.051985 | orchestrator | 00:01:24.051 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-03 00:01:24.051993 | orchestrator | 00:01:24.051 STDOUT terraform:  } 2025-05-03 00:01:24.052048 | orchestrator | 00:01:24.051 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-05-03 00:01:24.052098 | orchestrator | 00:01:24.052 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-05-03 00:01:24.052123 | orchestrator | 00:01:24.052 STDOUT terraform:  + direction = "ingress" 2025-05-03 00:01:24.052144 | orchestrator | 00:01:24.052 STDOUT terraform:  + ethertype = "IPv4" 2025-05-03 00:01:24.052175 | orchestrator | 00:01:24.052 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.052196 | orchestrator | 00:01:24.052 STDOUT terraform:  + protocol = "udp" 2025-05-03 00:01:24.052227 | orchestrator | 00:01:24.052 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.052289 | orchestrator | 00:01:24.052 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-03 00:01:24.052314 | orchestrator | 00:01:24.052 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-03 00:01:24.052345 | orchestrator | 00:01:24.052 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-03 00:01:24.052374 | orchestrator | 00:01:24.052 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-03 00:01:24.052382 | orchestrator | 00:01:24.052 STDOUT terraform:  } 2025-05-03 00:01:24.052436 | orchestrator | 00:01:24.052 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-05-03 00:01:24.052488 | orchestrator | 00:01:24.052 STDOUT terraform:  + resource 
"openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-05-03 00:01:24.052514 | orchestrator | 00:01:24.052 STDOUT terraform:  + direction = "ingress" 2025-05-03 00:01:24.052536 | orchestrator | 00:01:24.052 STDOUT terraform:  + ethertype = "IPv4" 2025-05-03 00:01:24.052567 | orchestrator | 00:01:24.052 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.052588 | orchestrator | 00:01:24.052 STDOUT terraform:  + protocol = "icmp" 2025-05-03 00:01:24.052619 | orchestrator | 00:01:24.052 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.052650 | orchestrator | 00:01:24.052 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-03 00:01:24.052675 | orchestrator | 00:01:24.052 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-03 00:01:24.052705 | orchestrator | 00:01:24.052 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-03 00:01:24.052735 | orchestrator | 00:01:24.052 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-03 00:01:24.052743 | orchestrator | 00:01:24.052 STDOUT terraform:  } 2025-05-03 00:01:24.052820 | orchestrator | 00:01:24.052 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-05-03 00:01:24.052856 | orchestrator | 00:01:24.052 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-05-03 00:01:24.052886 | orchestrator | 00:01:24.052 STDOUT terraform:  + description = "vrrp" 2025-05-03 00:01:24.052910 | orchestrator | 00:01:24.052 STDOUT terraform:  + direction = "ingress" 2025-05-03 00:01:24.052931 | orchestrator | 00:01:24.052 STDOUT terraform:  + ethertype = "IPv4" 2025-05-03 00:01:24.052975 | orchestrator | 00:01:24.052 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.053008 | orchestrator | 00:01:24.052 STDOUT terraform:  + protocol = "112" 2025-05-03 00:01:24.053015 | orchestrator | 00:01:24.052 STDOUT terraform:  + region = 
(known after apply) 2025-05-03 00:01:24.053044 | orchestrator | 00:01:24.053 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-03 00:01:24.053064 | orchestrator | 00:01:24.053 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-03 00:01:24.053095 | orchestrator | 00:01:24.053 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-03 00:01:24.053125 | orchestrator | 00:01:24.053 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-03 00:01:24.053132 | orchestrator | 00:01:24.053 STDOUT terraform:  } 2025-05-03 00:01:24.053184 | orchestrator | 00:01:24.053 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-05-03 00:01:24.053232 | orchestrator | 00:01:24.053 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-05-03 00:01:24.053259 | orchestrator | 00:01:24.053 STDOUT terraform:  + all_tags = (known after apply) 2025-05-03 00:01:24.053294 | orchestrator | 00:01:24.053 STDOUT terraform:  + description = "management security group" 2025-05-03 00:01:24.053322 | orchestrator | 00:01:24.053 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.053352 | orchestrator | 00:01:24.053 STDOUT terraform:  + name = "testbed-management" 2025-05-03 00:01:24.053380 | orchestrator | 00:01:24.053 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.053408 | orchestrator | 00:01:24.053 STDOUT terraform:  + stateful = (known after apply) 2025-05-03 00:01:24.053437 | orchestrator | 00:01:24.053 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-03 00:01:24.053444 | orchestrator | 00:01:24.053 STDOUT terraform:  } 2025-05-03 00:01:24.053492 | orchestrator | 00:01:24.053 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-05-03 00:01:24.053538 | orchestrator | 00:01:24.053 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 
2025-05-03 00:01:24.053565 | orchestrator | 00:01:24.053 STDOUT terraform:  + all_tags = (known after apply) 2025-05-03 00:01:24.053595 | orchestrator | 00:01:24.053 STDOUT terraform:  + description = "node security group" 2025-05-03 00:01:24.053625 | orchestrator | 00:01:24.053 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.053649 | orchestrator | 00:01:24.053 STDOUT terraform:  + name = "testbed-node" 2025-05-03 00:01:24.053677 | orchestrator | 00:01:24.053 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.053706 | orchestrator | 00:01:24.053 STDOUT terraform:  + stateful = (known after apply) 2025-05-03 00:01:24.053734 | orchestrator | 00:01:24.053 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-03 00:01:24.053741 | orchestrator | 00:01:24.053 STDOUT terraform:  } 2025-05-03 00:01:24.053804 | orchestrator | 00:01:24.053 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-05-03 00:01:24.053842 | orchestrator | 00:01:24.053 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-05-03 00:01:24.053873 | orchestrator | 00:01:24.053 STDOUT terraform:  + all_tags = (known after apply) 2025-05-03 00:01:24.053903 | orchestrator | 00:01:24.053 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-05-03 00:01:24.053925 | orchestrator | 00:01:24.053 STDOUT terraform:  + dns_nameservers = [ 2025-05-03 00:01:24.053943 | orchestrator | 00:01:24.053 STDOUT terraform:  + "8.8.8.8", 2025-05-03 00:01:24.053959 | orchestrator | 00:01:24.053 STDOUT terraform:  + "9.9.9.9", 2025-05-03 00:01:24.053966 | orchestrator | 00:01:24.053 STDOUT terraform:  ] 2025-05-03 00:01:24.053989 | orchestrator | 00:01:24.053 STDOUT terraform:  + enable_dhcp = true 2025-05-03 00:01:24.054041 | orchestrator | 00:01:24.053 STDOUT terraform:  + gateway_ip = (known after apply) 2025-05-03 00:01:24.054064 | orchestrator | 00:01:24.054 STDOUT terraform:  + id = (known after apply) 
2025-05-03 00:01:24.054084 | orchestrator | 00:01:24.054 STDOUT terraform:  + ip_version = 4 2025-05-03 00:01:24.054114 | orchestrator | 00:01:24.054 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-05-03 00:01:24.054144 | orchestrator | 00:01:24.054 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-05-03 00:01:24.054174 | orchestrator | 00:01:24.054 STDOUT terraform:  + name = "subnet-testbed-management" 2025-05-03 00:01:24.054250 | orchestrator | 00:01:24.054 STDOUT terraform:  + network_id = (known after apply) 2025-05-03 00:01:24.054271 | orchestrator | 00:01:24.054 STDOUT terraform:  + no_gateway = false 2025-05-03 00:01:24.054301 | orchestrator | 00:01:24.054 STDOUT terraform:  + region = (known after apply) 2025-05-03 00:01:24.054332 | orchestrator | 00:01:24.054 STDOUT terraform:  + service_types = (known after apply) 2025-05-03 00:01:24.054363 | orchestrator | 00:01:24.054 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-03 00:01:24.054382 | orchestrator | 00:01:24.054 STDOUT terraform:  + allocation_pool { 2025-05-03 00:01:24.054408 | orchestrator | 00:01:24.054 STDOUT terraform:  + end = "192.168.31.250" 2025-05-03 00:01:24.054432 | orchestrator | 00:01:24.054 STDOUT terraform:  + start = "192.168.31.200" 2025-05-03 00:01:24.054440 | orchestrator | 00:01:24.054 STDOUT terraform:  } 2025-05-03 00:01:24.054457 | orchestrator | 00:01:24.054 STDOUT terraform:  } 2025-05-03 00:01:24.054482 | orchestrator | 00:01:24.054 STDOUT terraform:  # terraform_data.image will be created 2025-05-03 00:01:24.054507 | orchestrator | 00:01:24.054 STDOUT terraform:  + resource "terraform_data" "image" { 2025-05-03 00:01:24.054532 | orchestrator | 00:01:24.054 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.054556 | orchestrator | 00:01:24.054 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-03 00:01:24.054580 | orchestrator | 00:01:24.054
STDOUT terraform:  + output = (known after apply) 2025-05-03 00:01:24.054587 | orchestrator | 00:01:24.054 STDOUT terraform:  } 2025-05-03 00:01:24.054620 | orchestrator | 00:01:24.054 STDOUT terraform:  # terraform_data.image_node will be created 2025-05-03 00:01:24.054650 | orchestrator | 00:01:24.054 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-05-03 00:01:24.054676 | orchestrator | 00:01:24.054 STDOUT terraform:  + id = (known after apply) 2025-05-03 00:01:24.054698 | orchestrator | 00:01:24.054 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-03 00:01:24.054723 | orchestrator | 00:01:24.054 STDOUT terraform:  + output = (known after apply) 2025-05-03 00:01:24.054731 | orchestrator | 00:01:24.054 STDOUT terraform:  } 2025-05-03 00:01:24.054762 | orchestrator | 00:01:24.054 STDOUT terraform: Plan: 82 to add, 0 to change, 0 to destroy. 2025-05-03 00:01:24.054769 | orchestrator | 00:01:24.054 STDOUT terraform: Changes to Outputs: 2025-05-03 00:01:24.054810 | orchestrator | 00:01:24.054 STDOUT terraform:  + manager_address = (sensitive value) 2025-05-03 00:01:24.054848 | orchestrator | 00:01:24.054 STDOUT terraform:  + private_key = (sensitive value) 2025-05-03 00:01:24.261171 | orchestrator | 00:01:24.260 STDOUT terraform: terraform_data.image_node: Creating... 2025-05-03 00:01:24.264140 | orchestrator | 00:01:24.263 STDOUT terraform: terraform_data.image: Creating... 2025-05-03 00:01:24.264194 | orchestrator | 00:01:24.263 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=8988d53b-b751-547f-7f9e-a6915c176ead] 2025-05-03 00:01:24.264205 | orchestrator | 00:01:24.263 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=b5fa29c0-af70-7ceb-7ea5-a2e401c35670] 2025-05-03 00:01:24.274242 | orchestrator | 00:01:24.274 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 
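[Editor's note] The security-group rules planned above (ssh, wireguard, vrrp, and the intra-subnet tcp/udp/icmp rules) follow one pattern. A Terraform configuration consistent with the planned attributes of `security_group_management_rule1` would look roughly like this; it is a reconstruction from the plan output, not the actual osism/testbed source:

```hcl
# Reconstructed from the planned attributes: ingress TCP/22 from anywhere,
# attached to the "testbed-management" security group. The reference to
# openstack_networking_secgroup_v2.security_group_management matches the
# resource names shown in the plan.
resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
```

The other rules differ only in description, protocol, port range, and `remote_ip_prefix` (e.g. `192.168.16.0/20` for the intra-subnet rules, protocol `"112"` for vrrp).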
2025-05-03 00:01:24.281280 | orchestrator | 00:01:24.281 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-05-03 00:01:24.282733 | orchestrator | 00:01:24.282 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-05-03 00:01:24.283556 | orchestrator | 00:01:24.283 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-05-03 00:01:24.284107 | orchestrator | 00:01:24.284 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-05-03 00:01:24.284564 | orchestrator | 00:01:24.284 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creating... 2025-05-03 00:01:24.284613 | orchestrator | 00:01:24.284 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-05-03 00:01:24.285932 | orchestrator | 00:01:24.285 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-05-03 00:01:24.292539 | orchestrator | 00:01:24.291 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-05-03 00:01:24.723964 | orchestrator | 00:01:24.291 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creating... 2025-05-03 00:01:24.724109 | orchestrator | 00:01:24.723 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-03 00:01:24.738342 | orchestrator | 00:01:24.723 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-03 00:01:24.738455 | orchestrator | 00:01:24.738 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creating... 2025-05-03 00:01:24.739590 | orchestrator | 00:01:24.739 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 
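[Editor's note] Both image data sources read above resolve to the same image id (`cd9ae1ce-…`), consistent with manager and nodes using the same "Ubuntu 24.04" image, which the plan showed as the `input` of `terraform_data.image` and `terraform_data.image_node`. A sketch of the lookup; the exact filter arguments are not visible in this log, so `most_recent` is an assumption:

```hcl
# Looks up the Glance image by name; "Ubuntu 24.04" comes from the
# terraform_data.image input shown in the plan. most_recent is assumed,
# to disambiguate if several images share the name.
data "openstack_images_image_v2" "image" {
  name        = "Ubuntu 24.04"
  most_recent = true
}
```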
2025-05-03 00:01:30.092733 | orchestrator | 00:01:30.092 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=b658f636-5956-4a1f-aaa3-02eb99dda951] 2025-05-03 00:01:30.100286 | orchestrator | 00:01:30.100 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creating... 2025-05-03 00:01:34.285147 | orchestrator | 00:01:34.284 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-05-03 00:01:34.286152 | orchestrator | 00:01:34.285 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-05-03 00:01:34.286305 | orchestrator | 00:01:34.286 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-05-03 00:01:34.286423 | orchestrator | 00:01:34.286 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Still creating... [10s elapsed] 2025-05-03 00:01:34.286550 | orchestrator | 00:01:34.286 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-05-03 00:01:34.293544 | orchestrator | 00:01:34.293 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-05-03 00:01:34.293652 | orchestrator | 00:01:34.293 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Still creating... [10s elapsed] 2025-05-03 00:01:34.739530 | orchestrator | 00:01:34.739 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Still creating... [10s elapsed] 2025-05-03 00:01:34.740818 | orchestrator | 00:01:34.740 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... 
[10s elapsed] 2025-05-03 00:01:34.861175 | orchestrator | 00:01:34.860 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=592c23d3-c323-4834-ad18-db1726824a9d] 2025-05-03 00:01:34.867134 | orchestrator | 00:01:34.866 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creating... 2025-05-03 00:01:34.891538 | orchestrator | 00:01:34.891 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=95821b4b-1055-4eda-a747-4e8f49c386b3] 2025-05-03 00:01:34.896887 | orchestrator | 00:01:34.896 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=8a5c9f8e-4062-4859-b774-db2eb35d9068] 2025-05-03 00:01:34.901149 | orchestrator | 00:01:34.900 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creating... 2025-05-03 00:01:34.901404 | orchestrator | 00:01:34.901 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-05-03 00:01:34.907788 | orchestrator | 00:01:34.907 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=bfd0f0a1-4f20-4970-92b4-aeacbd22f937] 2025-05-03 00:01:34.914085 | orchestrator | 00:01:34.913 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creating... 2025-05-03 00:01:34.917880 | orchestrator | 00:01:34.917 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creation complete after 11s [id=227e3005-1ee8-491d-9865-88581feda309] 2025-05-03 00:01:34.921706 | orchestrator | 00:01:34.921 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-05-03 00:01:34.936113 | orchestrator | 00:01:34.935 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 11s [id=2ed49551-2949-4c54-ab15-9674e610f8a2] 2025-05-03 00:01:34.940074 | orchestrator | 00:01:34.939 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 
2025-05-03 00:01:34.957144 | orchestrator | 00:01:34.956 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creation complete after 11s [id=494ae4e2-fb03-468f-bae0-ffa1e1c51b21] 2025-05-03 00:01:34.965449 | orchestrator | 00:01:34.961 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creating... 2025-05-03 00:01:34.985462 | orchestrator | 00:01:34.985 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=eb822f65-6f3e-4a32-952e-d8f6f7b2a5ab] 2025-05-03 00:01:34.991310 | orchestrator | 00:01:34.991 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creating... 2025-05-03 00:01:34.996086 | orchestrator | 00:01:34.995 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creation complete after 10s [id=6a5303a2-e8ba-422b-a7dc-ef5d91cab650] 2025-05-03 00:01:34.999690 | orchestrator | 00:01:34.999 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-05-03 00:01:35.346250 | orchestrator | 00:01:35.345 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2025-05-03 00:01:35.353217 | orchestrator | 00:01:35.353 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-05-03 00:01:40.102862 | orchestrator | 00:01:40.102 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Still creating... [10s elapsed] 2025-05-03 00:01:40.272620 | orchestrator | 00:01:40.272 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creation complete after 10s [id=626178b7-dd78-4872-a9d6-22f12232405d] 2025-05-03 00:01:40.280736 | orchestrator | 00:01:40.280 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-05-03 00:01:44.868086 | orchestrator | 00:01:44.867 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Still creating... 
[10s elapsed] 2025-05-03 00:01:44.901362 | orchestrator | 00:01:44.901 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Still creating... [10s elapsed] 2025-05-03 00:01:44.902289 | orchestrator | 00:01:44.902 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-05-03 00:01:44.914594 | orchestrator | 00:01:44.914 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Still creating... [10s elapsed] 2025-05-03 00:01:44.922894 | orchestrator | 00:01:44.922 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-05-03 00:01:44.941396 | orchestrator | 00:01:44.941 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-05-03 00:01:44.962578 | orchestrator | 00:01:44.962 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Still creating... [10s elapsed] 2025-05-03 00:01:44.992032 | orchestrator | 00:01:44.991 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Still creating... [10s elapsed] 2025-05-03 00:01:45.099826 | orchestrator | 00:01:45.099 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creation complete after 10s [id=eb0ccd8d-cf00-4d19-a5e4-10c9d40fdd4f] 2025-05-03 00:01:45.106488 | orchestrator | 00:01:45.106 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creation complete after 10s [id=8ee89ec0-10a2-40d4-b2a3-ab6963ecc84d] 2025-05-03 00:01:45.117088 | orchestrator | 00:01:45.116 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-05-03 00:01:45.117218 | orchestrator | 00:01:45.116 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creation complete after 10s [id=cf08c0e7-08ad-4d2d-8710-ce05fc114cf2] 2025-05-03 00:01:45.117565 | orchestrator | 00:01:45.117 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 
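[Editor's note] The `subnet_management` resource whose creation starts above is fully specified by the earlier plan output (cidr, DNS servers, allocation pool). A configuration consistent with those planned values, reconstructed rather than taken from the actual testbed source:

```hcl
# All literal values below are taken directly from the plan output; the
# network_id reference matches openstack_networking_network_v2.net_management,
# which the log shows being created first.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```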
2025-05-03 00:01:45.126250 | orchestrator | 00:01:45.126 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-05-03 00:01:45.130483 | orchestrator | 00:01:45.130 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=7d8d47b0-f182-4852-ad97-fcd0be00a97a]
2025-05-03 00:01:45.135025 | orchestrator | 00:01:45.134 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-05-03 00:01:45.155657 | orchestrator | 00:01:45.155 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=60710ea4-1ba5-44da-b34f-cb4cc5f20e97]
2025-05-03 00:01:45.160978 | orchestrator | 00:01:45.160 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-05-03 00:01:45.183342 | orchestrator | 00:01:45.183 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=efb501f0-cdfc-4df2-8f60-0563271b3e1b]
2025-05-03 00:01:45.191626 | orchestrator | 00:01:45.191 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-05-03 00:01:45.206533 | orchestrator | 00:01:45.206 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creation complete after 10s [id=fbc89abf-41e4-403a-af47-fe4d6db2bcc8]
2025-05-03 00:01:45.206648 | orchestrator | 00:01:45.206 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creation complete after 10s [id=78b7c3f7-b361-43c7-bb55-097042834471]
2025-05-03 00:01:45.221966 | orchestrator | 00:01:45.221 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-05-03 00:01:45.222838 | orchestrator | 00:01:45.222 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
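The indexed addresses above (node_volume[0]…node_volume[17], node_base_volume[0]…[5]) are the classic Terraform `count` pattern: one resource block expanded into many instances. A minimal, hypothetical sketch of such a block, assuming the OpenStack provider; the variable name, volume name, and size are illustrative, not taken from the testbed repository:

```hcl
# Hypothetical sketch of the count-expanded volume resources seen in the log.
# number_of_volumes, the naming scheme, and the size are assumptions.
variable "number_of_volumes" {
  type    = number
  default = 18
}

resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = var.number_of_volumes
  name  = "testbed-node-volume-${count.index}"
  size  = 20 # GiB
}
```

Each instance is then addressed as `openstack_blockstorage_volume_v3.node_volume[N]`, which is exactly the form the progress lines in this log use.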
2025-05-03 00:01:45.226670 | orchestrator | 00:01:45.226 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=3f474e56a496f157304b3d39f3b68ee7f732e29f]
2025-05-03 00:01:45.232967 | orchestrator | 00:01:45.232 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=4062ebf1ba54feca0ab76e301095d540b00f128c]
2025-05-03 00:01:45.354141 | orchestrator | 00:01:45.353 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed]
2025-05-03 00:01:45.684257 | orchestrator | 00:01:45.683 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 11s [id=9bf86a25-ceeb-48f2-8f39-7aac5f067121]
2025-05-03 00:01:50.281148 | orchestrator | 00:01:50.280 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
2025-05-03 00:01:50.702740 | orchestrator | 00:01:50.702 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 11s [id=5da3c3bd-eee1-4827-9013-ed3efdd154fa]
2025-05-03 00:01:50.957630 | orchestrator | 00:01:50.957 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=52d73991-86aa-4d50-9035-905abfb16fc0]
2025-05-03 00:01:50.971002 | orchestrator | 00:01:50.970 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-05-03 00:01:55.117319 | orchestrator | 00:01:55.116 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
2025-05-03 00:01:55.127872 | orchestrator | 00:01:55.127 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
2025-05-03 00:01:55.136610 | orchestrator | 00:01:55.136 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed]
2025-05-03 00:01:55.162201 | orchestrator | 00:01:55.161 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
2025-05-03 00:01:55.192452 | orchestrator | 00:01:55.191 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
2025-05-03 00:01:55.464853 | orchestrator | 00:01:55.464 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=80e5cdca-5597-4b73-960a-f9a5fdfd6b66]
2025-05-03 00:01:55.483128 | orchestrator | 00:01:55.482 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=4f7b2d31-8f7a-47ec-8821-2cb523ca656c]
2025-05-03 00:01:55.506741 | orchestrator | 00:01:55.506 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 11s [id=9d13b5c6-5c37-4969-9bdf-e1b816fbff4c]
2025-05-03 00:01:55.519100 | orchestrator | 00:01:55.518 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=c9ced28c-7e17-4b12-aa14-5845e36ffd1d]
2025-05-03 00:01:55.535833 | orchestrator | 00:01:55.535 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 11s [id=d40fd274-36d2-4e35-9a5d-9edc2ca11b28]
2025-05-03 00:01:58.386860 | orchestrator | 00:01:58.386 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 7s [id=a294d287-0d9c-4da8-993d-58c00316e130]
2025-05-03 00:01:58.392191 | orchestrator | 00:01:58.391 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-05-03 00:01:58.394518 | orchestrator | 00:01:58.394 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-05-03 00:01:58.398716 | orchestrator | 00:01:58.398 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-05-03 00:01:58.559623 | orchestrator | 00:01:58.559 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=333edce6-b9cc-4ff7-ada6-170b90f04b7b]
2025-05-03 00:01:58.568716 | orchestrator | 00:01:58.568 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-05-03 00:01:58.569125 | orchestrator | 00:01:58.568 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-05-03 00:01:58.570393 | orchestrator | 00:01:58.570 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-05-03 00:01:58.574715 | orchestrator | 00:01:58.574 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-05-03 00:01:58.575023 | orchestrator | 00:01:58.574 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-05-03 00:01:58.575052 | orchestrator | 00:01:58.574 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=c19d264b-d55f-46d6-a370-3586bed67ecf]
2025-05-03 00:01:58.576045 | orchestrator | 00:01:58.575 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-05-03 00:01:58.585274 | orchestrator | 00:01:58.585 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-05-03 00:01:58.586481 | orchestrator | 00:01:58.586 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-05-03 00:01:58.589209 | orchestrator | 00:01:58.589 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-05-03 00:01:58.700772 | orchestrator | 00:01:58.700 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=d3f8734d-8f27-431e-a0cf-042a17624389]
2025-05-03 00:01:58.707762 | orchestrator | 00:01:58.707 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=a3ff5da7-fac3-47f6-ae4f-0f78dc7c40eb]
2025-05-03 00:01:58.719414 | orchestrator | 00:01:58.719 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-05-03 00:01:58.719578 | orchestrator | 00:01:58.719 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-05-03 00:01:58.818999 | orchestrator | 00:01:58.818 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=7a175143-047a-4f68-b2cb-4eeddec92e95]
2025-05-03 00:01:58.838117 | orchestrator | 00:01:58.837 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-05-03 00:01:58.845608 | orchestrator | 00:01:58.845 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=81c417cf-170d-4c35-850d-3ec2aa46deaf]
2025-05-03 00:01:58.864919 | orchestrator | 00:01:58.864 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-05-03 00:01:58.940712 | orchestrator | 00:01:58.940 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=ecec64be-f152-4cf5-8cb4-caf5baa22f44]
2025-05-03 00:01:58.953840 | orchestrator | 00:01:58.953 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-05-03 00:01:58.958430 | orchestrator | 00:01:58.958 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=39fcb913-057e-487c-b6a1-54e01b669034]
2025-05-03 00:01:58.972431 | orchestrator | 00:01:58.972 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-05-03 00:01:59.065187 | orchestrator | 00:01:59.064 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=03ee67a8-c188-42eb-a005-2a54b01dffc9]
2025-05-03 00:01:59.076231 | orchestrator | 00:01:59.075 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-05-03 00:01:59.223006 | orchestrator | 00:01:59.222 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=9c5c796f-a98f-4fbc-a10c-3d88d3a9bfef]
2025-05-03 00:01:59.381110 | orchestrator | 00:01:59.380 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=e41e0c73-4f00-4d1e-8abc-134fc9e4dbda]
2025-05-03 00:02:04.214400 | orchestrator | 00:02:04.211 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 5s [id=96c9e881-3660-4dc4-a3ec-53eb6058a993]
2025-05-03 00:02:04.539195 | orchestrator | 00:02:04.538 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=91955358-e201-4b2c-8c8b-2e6509b6713b]
2025-05-03 00:02:04.629241 | orchestrator | 00:02:04.628 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=62a35bd7-ea01-41c4-9ec5-c7cbc4196390]
2025-05-03 00:02:04.648690 | orchestrator | 00:02:04.648 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=3fbe2154-7fe2-499f-b5d3-d6c4070148af]
2025-05-03 00:02:04.804793 | orchestrator | 00:02:04.804 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=e3c5250f-8547-4ad5-95f5-3a0de9f7a42d]
2025-05-03 00:02:04.852949 | orchestrator | 00:02:04.852 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=3d0b3a5b-db6f-412b-946c-73733a276f3d]
2025-05-03 00:02:04.952102 | orchestrator | 00:02:04.951 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=30651cef-084b-42af-abc8-09d3b4fab1d9]
2025-05-03 00:02:05.493454 | orchestrator | 00:02:05.493 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=7ecfdca8-a0a3-4a47-acd3-c970ac3eb00f]
2025-05-03 00:02:05.523594 | orchestrator | 00:02:05.523 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-05-03 00:02:05.533490 | orchestrator | 00:02:05.533 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-05-03 00:02:05.534552 | orchestrator | 00:02:05.534 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-05-03 00:02:05.542615 | orchestrator | 00:02:05.542 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-05-03 00:02:05.549375 | orchestrator | 00:02:05.549 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-05-03 00:02:05.551047 | orchestrator | 00:02:05.550 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-05-03 00:02:05.556649 | orchestrator | 00:02:05.556 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-05-03 00:02:12.316209 | orchestrator | 00:02:12.315 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 6s [id=5b6c467e-a26b-45f5-ac99-e74c4aadb041]
2025-05-03 00:02:12.328564 | orchestrator | 00:02:12.328 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-05-03 00:02:12.337743 | orchestrator | 00:02:12.337 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-05-03 00:02:12.337937 | orchestrator | 00:02:12.337 STDOUT terraform: local_file.inventory: Creating...
2025-05-03 00:02:12.342692 | orchestrator | 00:02:12.342 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=ebff4c3246df5c42bd3b06119f9b66df8f682bab]
2025-05-03 00:02:12.343226 | orchestrator | 00:02:12.342 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=954fb6ea541457c6f6ac609f2de6f639d23cf6f0]
2025-05-03 00:02:12.814228 | orchestrator | 00:02:12.813 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=5b6c467e-a26b-45f5-ac99-e74c4aadb041]
2025-05-03 00:02:15.536723 | orchestrator | 00:02:15.536 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-05-03 00:02:15.542793 | orchestrator | 00:02:15.542 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-05-03 00:02:15.550351 | orchestrator | 00:02:15.550 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-05-03 00:02:15.551223 | orchestrator | 00:02:15.551 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-05-03 00:02:15.551370 | orchestrator | 00:02:15.551 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-05-03 00:02:15.559714 | orchestrator | 00:02:15.559 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-05-03 00:02:25.537740 | orchestrator | 00:02:25.537 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-05-03 00:02:25.543857 | orchestrator | 00:02:25.543 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-05-03 00:02:25.551299 | orchestrator | 00:02:25.550 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-05-03 00:02:25.551416 | orchestrator | 00:02:25.551 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-05-03 00:02:25.551582 | orchestrator | 00:02:25.551 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-05-03 00:02:25.559812 | orchestrator | 00:02:25.559 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-05-03 00:02:25.891176 | orchestrator | 00:02:25.890 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=e7a26156-48ae-43bb-a565-abd398d12189]
2025-05-03 00:02:25.960608 | orchestrator | 00:02:25.960 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 20s [id=526f5901-01a8-4b6e-820d-bed7456b88c3]
2025-05-03 00:02:26.001769 | orchestrator | 00:02:26.001 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=25e708f7-ce1f-4eb6-90bb-6f4d54bbc848]
2025-05-03 00:02:26.025013 | orchestrator | 00:02:26.024 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=073bccd6-783a-4920-bf0f-a64a16aef358]
2025-05-03 00:02:26.107292 | orchestrator | 00:02:26.106 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 20s [id=96e2d5f5-e9ae-4d51-a3d3-2b7ce24951fa]
2025-05-03 00:02:35.552565 | orchestrator | 00:02:35.552 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-05-03 00:02:36.352689 | orchestrator | 00:02:36.352 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=6aca762b-f270-4c26-a5b3-6192a9017c83]
2025-05-03 00:02:36.365879 | orchestrator | 00:02:36.365 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-05-03 00:02:36.372459 | orchestrator | 00:02:36.372 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=7224562209865899548]
2025-05-03 00:02:36.377531 | orchestrator | 00:02:36.377 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-05-03 00:02:36.385762 | orchestrator | 00:02:36.385 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-05-03 00:02:36.387350 | orchestrator | 00:02:36.387 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-05-03 00:02:36.392148 | orchestrator | 00:02:36.391 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creating...
2025-05-03 00:02:36.393470 | orchestrator | 00:02:36.393 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creating...
2025-05-03 00:02:36.400408 | orchestrator | 00:02:36.400 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creating...
2025-05-03 00:02:36.411359 | orchestrator | 00:02:36.411 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creating...
2025-05-03 00:02:36.416353 | orchestrator | 00:02:36.416 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-05-03 00:02:36.423531 | orchestrator | 00:02:36.423 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creating...
2025-05-03 00:02:36.427134 | orchestrator | 00:02:36.426 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creating...
2025-05-03 00:02:41.721560 | orchestrator | 00:02:41.716 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 6s [id=25e708f7-ce1f-4eb6-90bb-6f4d54bbc848/592c23d3-c323-4834-ad18-db1726824a9d]
2025-05-03 00:02:41.724775 | orchestrator | 00:02:41.724 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creation complete after 6s [id=073bccd6-783a-4920-bf0f-a64a16aef358/fbc89abf-41e4-403a-af47-fe4d6db2bcc8]
2025-05-03 00:02:41.731363 | orchestrator | 00:02:41.731 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-05-03 00:02:41.735173 | orchestrator | 00:02:41.735 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-05-03 00:02:41.742301 | orchestrator | 00:02:41.742 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 6s [id=96e2d5f5-e9ae-4d51-a3d3-2b7ce24951fa/2ed49551-2949-4c54-ab15-9674e610f8a2]
2025-05-03 00:02:41.750258 | orchestrator | 00:02:41.750 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 6s [id=526f5901-01a8-4b6e-820d-bed7456b88c3/efb501f0-cdfc-4df2-8f60-0563271b3e1b]
2025-05-03 00:02:41.751553 | orchestrator | 00:02:41.751 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creation complete after 6s [id=e7a26156-48ae-43bb-a565-abd398d12189/78b7c3f7-b361-43c7-bb55-097042834471]
2025-05-03 00:02:41.752899 | orchestrator | 00:02:41.752 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creating...
2025-05-03 00:02:41.754737 | orchestrator | 00:02:41.754 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creation complete after 6s [id=6aca762b-f270-4c26-a5b3-6192a9017c83/eb0ccd8d-cf00-4d19-a5e4-10c9d40fdd4f]
2025-05-03 00:02:41.760665 | orchestrator | 00:02:41.760 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-05-03 00:02:41.767778 | orchestrator | 00:02:41.767 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creating...
2025-05-03 00:02:41.773928 | orchestrator | 00:02:41.767 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creating...
2025-05-03 00:02:41.773974 | orchestrator | 00:02:41.773 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creation complete after 6s [id=25e708f7-ce1f-4eb6-90bb-6f4d54bbc848/cf08c0e7-08ad-4d2d-8710-ce05fc114cf2]
2025-05-03 00:02:41.778784 | orchestrator | 00:02:41.778 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creation complete after 6s [id=073bccd6-783a-4920-bf0f-a64a16aef358/494ae4e2-fb03-468f-bae0-ffa1e1c51b21]
2025-05-03 00:02:41.782520 | orchestrator | 00:02:41.782 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creation complete after 6s [id=e7a26156-48ae-43bb-a565-abd398d12189/6a5303a2-e8ba-422b-a7dc-ef5d91cab650]
2025-05-03 00:02:41.787364 | orchestrator | 00:02:41.787 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-05-03 00:02:41.799740 | orchestrator | 00:02:41.799 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-05-03 00:02:41.800458 | orchestrator | 00:02:41.800 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 6s [id=6aca762b-f270-4c26-a5b3-6192a9017c83/95821b4b-1055-4eda-a747-4e8f49c386b3]
2025-05-03 00:02:41.803737 | orchestrator | 00:02:41.803 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
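Each `node_volume_attachment[N]` above joins one server and one volume, and its resulting ID is literally "&lt;server id&gt;/&lt;volume id&gt;" (compare the completion lines against the earlier `node_server` and `node_volume` IDs; attachments 0, 6 and 12 all land on node_server[0], suggesting an index-modulo mapping). A hypothetical sketch of that pattern, assuming the OpenStack provider and a modulo-6 mapping inferred from the log, not confirmed from the testbed sources:

```hcl
# Hypothetical sketch: attach 18 volumes round-robin to 6 node servers.
# The modulo mapping is inferred from the attachment IDs in the log.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 18
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```

Because the attachment references both resources, Terraform only starts creating attachments once the referenced server and volume exist, which is why the attachment phase begins after the last `node_server` completes.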
2025-05-03 00:02:47.067515 | orchestrator | 00:02:47.067 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=526f5901-01a8-4b6e-820d-bed7456b88c3/eb822f65-6f3e-4a32-952e-d8f6f7b2a5ab]
2025-05-03 00:02:47.078969 | orchestrator | 00:02:47.078 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=073bccd6-783a-4920-bf0f-a64a16aef358/60710ea4-1ba5-44da-b34f-cb4cc5f20e97]
2025-05-03 00:02:47.117146 | orchestrator | 00:02:47.116 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creation complete after 5s [id=25e708f7-ce1f-4eb6-90bb-6f4d54bbc848/626178b7-dd78-4872-a9d6-22f12232405d]
2025-05-03 00:02:47.124566 | orchestrator | 00:02:47.124 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creation complete after 5s [id=96e2d5f5-e9ae-4d51-a3d3-2b7ce24951fa/227e3005-1ee8-491d-9865-88581feda309]
2025-05-03 00:02:47.147643 | orchestrator | 00:02:47.147 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=6aca762b-f270-4c26-a5b3-6192a9017c83/7d8d47b0-f182-4852-ad97-fcd0be00a97a]
2025-05-03 00:02:47.150245 | orchestrator | 00:02:47.149 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=e7a26156-48ae-43bb-a565-abd398d12189/8a5c9f8e-4062-4859-b774-db2eb35d9068]
2025-05-03 00:02:47.152656 | orchestrator | 00:02:47.152 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creation complete after 5s [id=526f5901-01a8-4b6e-820d-bed7456b88c3/8ee89ec0-10a2-40d4-b2a3-ab6963ecc84d]
2025-05-03 00:02:47.200242 | orchestrator | 00:02:47.200 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=96e2d5f5-e9ae-4d51-a3d3-2b7ce24951fa/bfd0f0a1-4f20-4970-92b4-aeacbd22f937]
2025-05-03 00:02:51.804891 | orchestrator | 00:02:51.804 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-05-03 00:03:01.809510 | orchestrator | 00:03:01.809 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-05-03 00:03:02.469963 | orchestrator | 00:03:02.469 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=9738e4de-df15-4d1d-b87b-f8725beb4de0]
2025-05-03 00:03:02.484526 | orchestrator | 00:03:02.484 STDOUT terraform: Apply complete! Resources: 82 added, 0 changed, 0 destroyed.
2025-05-03 00:03:02.492877 | orchestrator | 00:03:02.484 STDOUT terraform: Outputs:
2025-05-03 00:03:02.493065 | orchestrator | 00:03:02.484 STDOUT terraform: manager_address =
2025-05-03 00:03:02.493097 | orchestrator | 00:03:02.484 STDOUT terraform: private_key =
2025-05-03 00:03:12.735740 | orchestrator | changed
2025-05-03 00:03:12.783623 |
2025-05-03 00:03:12.783807 | TASK [Fetch manager address]
2025-05-03 00:03:13.215621 | orchestrator | ok
2025-05-03 00:03:13.226876 |
2025-05-03 00:03:13.227007 | TASK [Set manager_host address]
2025-05-03 00:03:13.323431 | orchestrator | ok
2025-05-03 00:03:13.331805 |
2025-05-03 00:03:13.331908 | LOOP [Update ansible collections]
2025-05-03 00:03:14.218950 | orchestrator | changed
2025-05-03 00:03:15.061785 | orchestrator | changed
2025-05-03 00:03:15.088477 |
2025-05-03 00:03:15.088685 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-05-03 00:03:25.698652 | orchestrator | ok
2025-05-03 00:03:25.713270 |
2025-05-03 00:03:25.713444 | TASK [Wait a little longer for the manager so that everything is ready]
2025-05-03 00:04:25.767112 | orchestrator | ok
2025-05-03 00:04:25.778081 |
2025-05-03 00:04:25.778191 | TASK [Fetch manager ssh hostkey]
2025-05-03 00:04:26.880437 | orchestrator | Output suppressed because no_log was given
2025-05-03 00:04:26.891935 |
2025-05-03 00:04:26.892061 | TASK [Get ssh keypair from terraform environment]
2025-05-03 00:04:27.434002 | orchestrator | changed
2025-05-03 00:04:27.458151 |
2025-05-03 00:04:27.458280 | TASK [Point out that the following task takes some time and does not give any output]
2025-05-03 00:04:27.514679 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-05-03 00:04:27.526155 |
2025-05-03 00:04:27.526314 | TASK [Run manager part 0]
2025-05-03 00:04:28.375415 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-05-03 00:04:28.417525 | orchestrator |
2025-05-03 00:04:30.078535 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2025-05-03 00:04:30.078587 | orchestrator |
2025-05-03 00:04:30.078614 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2025-05-03 00:04:30.078632 | orchestrator | ok: [testbed-manager]
2025-05-03 00:04:31.908317 | orchestrator |
2025-05-03 00:04:31.908430 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-05-03 00:04:31.908464 | orchestrator |
2025-05-03 00:04:31.908482 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-03 00:04:31.908510 | orchestrator | ok: [testbed-manager]
2025-05-03 00:04:32.576364 | orchestrator |
2025-05-03 00:04:32.576410 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-05-03 00:04:32.576427 | orchestrator | ok: [testbed-manager]
2025-05-03 00:04:32.619031 | orchestrator |
2025-05-03 00:04:32.619106 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-05-03 00:04:32.619138 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:04:32.650934 | orchestrator |
2025-05-03 00:04:32.650990 | orchestrator | TASK [Update package cache] ****************************************************
2025-05-03 00:04:32.651010 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:04:32.682744 | orchestrator |
2025-05-03 00:04:32.682790 | orchestrator | TASK [Install required packages] ***********************************************
2025-05-03 00:04:32.682803 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:04:32.706999 | orchestrator |
2025-05-03 00:04:32.707045 | orchestrator | TASK [Remove some python packages] *********************************************
2025-05-03 00:04:32.707059 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:04:32.735655 | orchestrator |
2025-05-03 00:04:32.735733 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-05-03 00:04:32.735765 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:04:32.768096 | orchestrator |
2025-05-03 00:04:32.768138 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ******************************
2025-05-03 00:04:32.768150 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:04:32.804205 | orchestrator |
2025-05-03 00:04:32.804252 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2025-05-03 00:04:32.804269 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:04:33.558727 | orchestrator |
2025-05-03 00:04:33.558808 | orchestrator | TASK [Set APT options on manager] **********************************************
2025-05-03 00:04:33.558839 | orchestrator | changed: [testbed-manager]
2025-05-03 00:07:30.317146 | orchestrator |
2025-05-03 00:07:30.317199 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2025-05-03 00:07:30.317221 | orchestrator | changed: [testbed-manager]
2025-05-03 00:08:47.318928 | orchestrator |
2025-05-03 00:08:47.319172 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-05-03 00:08:47.319221 | orchestrator | changed: [testbed-manager]
2025-05-03 00:09:13.016570 | orchestrator |
2025-05-03 00:09:13.016685 | orchestrator | TASK [Install required packages] ***********************************************
2025-05-03 00:09:13.016728 | orchestrator | changed: [testbed-manager]
2025-05-03 00:09:21.681026 | orchestrator |
2025-05-03 00:09:21.681138 | orchestrator | TASK [Remove some python packages] *********************************************
2025-05-03 00:09:21.681172 | orchestrator | changed: [testbed-manager]
2025-05-03 00:09:21.729035 | orchestrator |
2025-05-03 00:09:21.729159 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-05-03 00:09:21.729223 | orchestrator | ok: [testbed-manager]
2025-05-03 00:09:22.523319 | orchestrator |
2025-05-03 00:09:22.523428 | orchestrator | TASK [Get current user] ********************************************************
2025-05-03 00:09:22.523466 | orchestrator | ok: [testbed-manager]
2025-05-03 00:09:23.249357 | orchestrator |
2025-05-03 00:09:23.249463 | orchestrator | TASK [Create venv directory] ***************************************************
2025-05-03 00:09:23.249505 | orchestrator | changed: [testbed-manager]
2025-05-03 00:09:30.390363 | orchestrator |
2025-05-03 00:09:30.390479 | orchestrator | TASK [Install netaddr in venv] *************************************************
2025-05-03 00:09:30.390516 | orchestrator | changed: [testbed-manager]
2025-05-03 00:09:36.246629 | orchestrator |
2025-05-03 00:09:36.246768 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2025-05-03 00:09:36.246823 | orchestrator | changed: [testbed-manager]
2025-05-03 00:09:38.921397 | orchestrator |
2025-05-03 00:09:38.921511 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2025-05-03 00:09:38.921546 | orchestrator | changed: [testbed-manager]
2025-05-03 00:09:40.620365 | orchestrator |
2025-05-03 00:09:40.620468 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2025-05-03 00:09:40.620505 | orchestrator | changed: [testbed-manager]
2025-05-03 00:09:41.825337 | orchestrator |
2025-05-03 00:09:41.825459 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2025-05-03 00:09:41.825498 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-05-03 00:09:41.871399 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-05-03 00:09:41.871508 | orchestrator |
2025-05-03 00:09:41.871550 | orchestrator | TASK [Sync sources in /opt/src] ************************************************
2025-05-03 00:09:41.871593 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-05-03 00:09:46.130397 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-05-03 00:09:46.130645 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-05-03 00:09:46.130669 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-05-03 00:09:46.130701 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-03 00:09:46.719519 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-03 00:09:46.719625 | orchestrator | 2025-05-03 00:09:46.719646 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-05-03 00:09:46.719677 | orchestrator | changed: [testbed-manager] 2025-05-03 00:10:07.517564 | orchestrator | 2025-05-03 00:10:07.517658 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-05-03 00:10:07.517687 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-05-03 00:10:09.785925 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-05-03 00:10:09.786077 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-05-03 00:10:09.786103 | orchestrator | 2025-05-03 00:10:09.786122 | orchestrator | TASK [Install local collections] *********************************************** 2025-05-03 00:10:09.786192 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-05-03 00:10:11.300511 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-05-03 00:10:11.300618 | orchestrator | 2025-05-03 00:10:11.300638 | orchestrator | PLAY [Create operator user] **************************************************** 2025-05-03 00:10:11.300654 | orchestrator | 2025-05-03 00:10:11.300668 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-03 00:10:11.300698 | orchestrator | ok: [testbed-manager] 2025-05-03 00:10:11.348518 | orchestrator | 2025-05-03 00:10:11.348626 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-03 00:10:11.348663 | orchestrator | ok: [testbed-manager] 2025-05-03 00:10:11.417042 | 
orchestrator | 2025-05-03 00:10:11.417154 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-03 00:10:11.417186 | orchestrator | ok: [testbed-manager] 2025-05-03 00:10:12.130980 | orchestrator | 2025-05-03 00:10:12.131120 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-03 00:10:12.131160 | orchestrator | changed: [testbed-manager] 2025-05-03 00:10:12.917464 | orchestrator | 2025-05-03 00:10:12.917517 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-03 00:10:12.917537 | orchestrator | changed: [testbed-manager] 2025-05-03 00:10:14.403882 | orchestrator | 2025-05-03 00:10:14.403935 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-03 00:10:14.403954 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-05-03 00:10:15.830801 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-05-03 00:10:15.830915 | orchestrator | 2025-05-03 00:10:15.830937 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-03 00:10:15.830970 | orchestrator | changed: [testbed-manager] 2025-05-03 00:10:17.573647 | orchestrator | 2025-05-03 00:10:17.573750 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-03 00:10:17.573785 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-05-03 00:10:18.144582 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-05-03 00:10:18.144707 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-05-03 00:10:18.144727 | orchestrator | 2025-05-03 00:10:18.144743 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-03 00:10:18.144774 | orchestrator | changed: [testbed-manager] 
2025-05-03 00:10:18.221935 | orchestrator | 2025-05-03 00:10:18.222099 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-03 00:10:18.222141 | orchestrator | skipping: [testbed-manager] 2025-05-03 00:10:19.352234 | orchestrator | 2025-05-03 00:10:19.352302 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-05-03 00:10:19.352324 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-03 00:10:19.390012 | orchestrator | changed: [testbed-manager] 2025-05-03 00:10:19.390145 | orchestrator | 2025-05-03 00:10:19.390157 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-03 00:10:19.390176 | orchestrator | skipping: [testbed-manager] 2025-05-03 00:10:19.426878 | orchestrator | 2025-05-03 00:10:19.426957 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-03 00:10:19.426978 | orchestrator | skipping: [testbed-manager] 2025-05-03 00:10:19.465459 | orchestrator | 2025-05-03 00:10:19.465519 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-03 00:10:19.465537 | orchestrator | skipping: [testbed-manager] 2025-05-03 00:10:19.515793 | orchestrator | 2025-05-03 00:10:19.515855 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-03 00:10:19.515875 | orchestrator | skipping: [testbed-manager] 2025-05-03 00:10:20.220423 | orchestrator | 2025-05-03 00:10:20.220526 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-03 00:10:20.220564 | orchestrator | ok: [testbed-manager] 2025-05-03 00:10:21.638223 | orchestrator | 2025-05-03 00:10:21.638308 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-03 00:10:21.638326 | orchestrator | 2025-05-03 
00:10:21.638342 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-03 00:10:21.638369 | orchestrator | ok: [testbed-manager] 2025-05-03 00:10:22.609154 | orchestrator | 2025-05-03 00:10:22.609259 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-05-03 00:10:22.609321 | orchestrator | changed: [testbed-manager] 2025-05-03 00:10:22.709832 | orchestrator | 2025-05-03 00:10:22.709948 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-03 00:10:22.709970 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-05-03 00:10:22.710196 | orchestrator | 2025-05-03 00:10:22.763670 | orchestrator | changed 2025-05-03 00:10:22.775628 | 2025-05-03 00:10:22.775738 | TASK [Point out that the log in on the manager is now possible] 2025-05-03 00:10:22.825215 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-05-03 00:10:22.835870 | 2025-05-03 00:10:22.835980 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-03 00:10:22.871002 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minuts for this task to complete. 
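The operator role above sets the `LANGUAGE`, `LANG`, and `LC_ALL` exports in `.bashrc` one line at a time, which Ansible's lineinfile-style tasks do idempotently: a line is appended only if it is not already present. A minimal shell sketch of that append-if-missing pattern (the helper name `append_once` is hypothetical, not part of the testbed scripts):

```shell
# Hypothetical helper: append a line to a file only if that exact line is
# not already present, mirroring the idempotent lineinfile behaviour above.
append_once() {
  local line=$1 file=$2
  grep -qxF -- "$line" "$file" 2>/dev/null || printf '%s\n' "$line" >> "$file"
}

rcfile=$(mktemp)
for line in 'export LANGUAGE=C.UTF-8' 'export LANG=C.UTF-8' 'export LC_ALL=C.UTF-8'; do
  append_once "$line" "$rcfile"
  append_once "$line" "$rcfile"   # second call is a no-op
done
wc -l < "$rcfile"                  # each of the three lines was added exactly once
rm -f "$rcfile"
```

Running the loop twice per line shows why re-running the play reports `ok` instead of `changed`: the file already contains the lines, so nothing is appended.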
2025-05-03 00:10:22.880272 |
2025-05-03 00:10:22.880482 | TASK [Run manager part 1 + 2]
2025-05-03 00:10:23.720028 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-05-03 00:10:23.774562 | orchestrator |
2025-05-03 00:10:26.290100 | orchestrator | PLAY [Run manager part 1] ******************************************************
2025-05-03 00:10:26.290218 | orchestrator |
2025-05-03 00:10:26.290260 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-03 00:10:26.290302 | orchestrator | ok: [testbed-manager]
2025-05-03 00:10:26.326136 | orchestrator |
2025-05-03 00:10:26.326253 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-05-03 00:10:26.326292 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:10:26.370201 | orchestrator |
2025-05-03 00:10:26.370272 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-05-03 00:10:26.370291 | orchestrator | ok: [testbed-manager]
2025-05-03 00:10:26.408972 | orchestrator |
2025-05-03 00:10:26.409036 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-05-03 00:10:26.409053 | orchestrator | ok: [testbed-manager]
2025-05-03 00:10:26.474587 | orchestrator |
2025-05-03 00:10:26.474654 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-05-03 00:10:26.474670 | orchestrator | ok: [testbed-manager]
2025-05-03 00:10:26.537722 | orchestrator |
2025-05-03 00:10:26.537791 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-05-03 00:10:26.537809 | orchestrator | ok: [testbed-manager]
2025-05-03 00:10:26.579767 | orchestrator |
2025-05-03 00:10:26.579838 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-05-03 00:10:26.579856 | orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager
2025-05-03 00:10:27.301302 | orchestrator |
2025-05-03 00:10:27.301454 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-05-03 00:10:27.301490 | orchestrator | ok: [testbed-manager]
2025-05-03 00:10:27.354113 | orchestrator |
2025-05-03 00:10:27.354184 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-05-03 00:10:27.354205 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:10:28.726423 | orchestrator |
2025-05-03 00:10:28.726517 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-05-03 00:10:28.726556 | orchestrator | changed: [testbed-manager]
2025-05-03 00:10:29.316372 | orchestrator |
2025-05-03 00:10:29.316481 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-05-03 00:10:29.316518 | orchestrator | ok: [testbed-manager]
2025-05-03 00:10:30.439696 | orchestrator |
2025-05-03 00:10:30.439795 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-05-03 00:10:30.439828 | orchestrator | changed: [testbed-manager]
2025-05-03 00:10:43.001112 | orchestrator |
2025-05-03 00:10:43.001226 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-05-03 00:10:43.001262 | orchestrator | changed: [testbed-manager]
2025-05-03 00:10:43.670519 | orchestrator |
2025-05-03 00:10:43.670629 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-05-03 00:10:43.670663 | orchestrator | ok: [testbed-manager]
2025-05-03 00:10:43.725784 | orchestrator |
2025-05-03 00:10:43.725890 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-05-03 00:10:43.725926 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:10:44.701530 | orchestrator |
2025-05-03 00:10:44.701663 | orchestrator | TASK [Copy SSH public key] *****************************************************
2025-05-03 00:10:44.701713 | orchestrator | changed: [testbed-manager]
2025-05-03 00:10:45.632806 | orchestrator |
2025-05-03 00:10:45.632864 | orchestrator | TASK [Copy SSH private key] ****************************************************
2025-05-03 00:10:45.632884 | orchestrator | changed: [testbed-manager]
2025-05-03 00:10:46.213738 | orchestrator |
2025-05-03 00:10:46.213794 | orchestrator | TASK [Create configuration directory] ******************************************
2025-05-03 00:10:46.213812 | orchestrator | changed: [testbed-manager]
2025-05-03 00:10:46.253044 | orchestrator |
2025-05-03 00:10:46.253141 | orchestrator | TASK [Copy testbed repo] *******************************************************
2025-05-03 00:10:46.253166 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-05-03 00:10:48.470335 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-05-03 00:10:48.470454 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-05-03 00:10:48.470477 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-05-03 00:10:48.470509 | orchestrator | changed: [testbed-manager]
2025-05-03 00:10:57.463217 | orchestrator |
2025-05-03 00:10:57.463334 | orchestrator | TASK [Install python requirements in venv] *************************************
2025-05-03 00:10:57.463373 | orchestrator | ok: [testbed-manager] => (item=Jinja2)
2025-05-03 00:10:58.500038 | orchestrator | ok: [testbed-manager] => (item=PyYAML)
2025-05-03 00:10:58.500175 | orchestrator | ok: [testbed-manager] => (item=packaging)
2025-05-03 00:10:58.500197 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3)
2025-05-03 00:10:58.500215 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2)
2025-05-03 00:10:58.500229 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0)
2025-05-03 00:10:58.500244 | orchestrator |
2025-05-03 00:10:58.500259 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] *********************
2025-05-03 00:10:58.500305 | orchestrator | changed: [testbed-manager]
2025-05-03 00:10:58.544806 | orchestrator |
2025-05-03 00:10:58.544913 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] ****************************
2025-05-03 00:10:58.544945 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:11:01.610196 | orchestrator |
2025-05-03 00:11:01.610294 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] *****************************
2025-05-03 00:11:01.610324 | orchestrator | changed: [testbed-manager]
2025-05-03 00:11:01.650776 | orchestrator |
2025-05-03 00:11:01.650848 | orchestrator | TASK [Run update-ca-trust on RedHat] *******************************************
2025-05-03 00:11:01.650880 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:12:32.889889 | orchestrator |
2025-05-03 00:12:32.889937 | orchestrator | TASK [Run manager part 2] ******************************************************
2025-05-03 00:12:32.889952 | orchestrator | changed: [testbed-manager]
2025-05-03 00:12:33.984414 | orchestrator |
2025-05-03 00:12:33.984464 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-05-03 00:12:33.984481 | orchestrator | ok: [testbed-manager]
2025-05-03 00:12:34.079709 | orchestrator |
2025-05-03 00:12:34.079784 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 00:12:34.079794 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-05-03 00:12:34.079800 | orchestrator |
2025-05-03 00:12:34.518011 | orchestrator | changed
2025-05-03 00:12:34.536417 |
2025-05-03 00:12:34.536600 | TASK [Reboot manager]
2025-05-03 00:12:36.118245 | orchestrator | changed
2025-05-03 00:12:36.137219 |
2025-05-03 00:12:36.137389 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-05-03 00:12:50.250714 | orchestrator | ok
2025-05-03 00:12:50.264188 |
2025-05-03 00:12:50.264342 | TASK [Wait a little longer for the manager so that everything is ready]
2025-05-03 00:13:50.313894 | orchestrator | ok
2025-05-03 00:13:50.324584 |
2025-05-03 00:13:50.324712 | TASK [Deploy manager + bootstrap nodes]
2025-05-03 00:13:52.629427 | orchestrator |
2025-05-03 00:13:52.632918 | orchestrator | # DEPLOY MANAGER
2025-05-03 00:13:52.632939 | orchestrator |
2025-05-03 00:13:52.632945 | orchestrator | + set -e
2025-05-03 00:13:52.632965 | orchestrator | + echo
2025-05-03 00:13:52.632972 | orchestrator | + echo '# DEPLOY MANAGER'
2025-05-03 00:13:52.632979 | orchestrator | + echo
2025-05-03 00:13:52.632988 | orchestrator | + cat /opt/manager-vars.sh
2025-05-03 00:13:52.633004 | orchestrator | export NUMBER_OF_NODES=6
2025-05-03 00:13:52.633114 | orchestrator |
2025-05-03 00:13:52.633122 | orchestrator | export CEPH_VERSION=reef
2025-05-03 00:13:52.633127 | orchestrator | export CONFIGURATION_VERSION=main
2025-05-03 00:13:52.633133 | orchestrator | export MANAGER_VERSION=8.1.0
2025-05-03 00:13:52.633154 | orchestrator | export OPENSTACK_VERSION=2024.2
2025-05-03 00:13:52.633159 | orchestrator |
2025-05-03 00:13:52.633165 | orchestrator | export ARA=false
2025-05-03 00:13:52.633170 | orchestrator | export TEMPEST=false
2025-05-03 00:13:52.633175 | orchestrator | export IS_ZUUL=true
2025-05-03 00:13:52.633180 | orchestrator |
2025-05-03 00:13:52.633185 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.136
2025-05-03 00:13:52.633191 | orchestrator | export EXTERNAL_API=false
2025-05-03 00:13:52.633196 | orchestrator |
2025-05-03 00:13:52.633201 | orchestrator | export IMAGE_USER=ubuntu
2025-05-03 00:13:52.633206 | orchestrator | export IMAGE_NODE_USER=ubuntu
2025-05-03 00:13:52.633212 | orchestrator |
2025-05-03 00:13:52.633217 | orchestrator | export CEPH_STACK=ceph-ansible
2025-05-03 00:13:52.633224 | orchestrator |
2025-05-03 00:13:52.634221 | orchestrator | + echo
2025-05-03 00:13:52.634230 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-05-03 00:13:52.634250 | orchestrator | ++ export INTERACTIVE=false
2025-05-03 00:13:52.634325 | orchestrator | ++ INTERACTIVE=false
2025-05-03 00:13:52.634332 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-05-03 00:13:52.634341 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-05-03 00:13:52.634348 | orchestrator | + source /opt/manager-vars.sh
2025-05-03 00:13:52.634355 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-05-03 00:13:52.634504 | orchestrator | ++ NUMBER_OF_NODES=6
2025-05-03 00:13:52.634514 | orchestrator | ++ export CEPH_VERSION=reef
2025-05-03 00:13:52.634553 | orchestrator | ++ CEPH_VERSION=reef
2025-05-03 00:13:52.634559 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-05-03 00:13:52.634564 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-05-03 00:13:52.634572 | orchestrator | ++ export MANAGER_VERSION=8.1.0
2025-05-03 00:13:52.634579 | orchestrator | ++ MANAGER_VERSION=8.1.0
2025-05-03 00:13:52.634835 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-05-03 00:13:52.634867 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-05-03 00:13:52.634873 | orchestrator | ++ export ARA=false
2025-05-03 00:13:52.634878 | orchestrator | ++ ARA=false
2025-05-03 00:13:52.634884 | orchestrator | ++ export TEMPEST=false
2025-05-03 00:13:52.634889 | orchestrator | ++ TEMPEST=false
2025-05-03 00:13:52.634894 | orchestrator | ++ export IS_ZUUL=true
2025-05-03 00:13:52.634899 | orchestrator | ++ IS_ZUUL=true
2025-05-03 00:13:52.634905 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.136
2025-05-03 00:13:52.634918 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.136
2025-05-03 00:13:52.634929 | orchestrator | ++ export EXTERNAL_API=false
2025-05-03 00:13:52.634974 | orchestrator | ++ EXTERNAL_API=false
2025-05-03 00:13:52.634980 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-05-03 00:13:52.634985 | orchestrator | ++ IMAGE_USER=ubuntu
2025-05-03 00:13:52.634990 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-05-03 00:13:52.634995 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-05-03 00:13:52.635002 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-05-03 00:13:52.635009 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-05-03 00:13:52.688396 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-05-03 00:13:52.688530 | orchestrator | + docker version
2025-05-03 00:13:52.932788 | orchestrator | Client: Docker Engine - Community
2025-05-03 00:13:52.936287 | orchestrator | Version: 26.1.4
2025-05-03 00:13:52.936409 | orchestrator | API version: 1.45
2025-05-03 00:13:52.936444 | orchestrator | Go version: go1.21.11
2025-05-03 00:13:52.936473 | orchestrator | Git commit: 5650f9b
2025-05-03 00:13:52.936499 | orchestrator | Built: Wed Jun 5 11:28:57 2024
2025-05-03 00:13:52.936530 | orchestrator | OS/Arch: linux/amd64
2025-05-03 00:13:52.936557 | orchestrator | Context: default
2025-05-03 00:13:52.936584 | orchestrator |
2025-05-03 00:13:52.936611 | orchestrator | Server: Docker Engine - Community
2025-05-03 00:13:52.936637 | orchestrator | Engine:
2025-05-03 00:13:52.936662 | orchestrator | Version: 26.1.4
2025-05-03 00:13:52.936688 | orchestrator | API version: 1.45 (minimum version 1.24)
2025-05-03 00:13:52.936715 | orchestrator | Go version: go1.21.11
2025-05-03 00:13:52.936732 | orchestrator | Git commit: de5c9cf
2025-05-03 00:13:52.936778 | orchestrator | Built: Wed Jun 5 11:28:57 2024
2025-05-03 00:13:52.936793 | orchestrator | OS/Arch: linux/amd64
2025-05-03 00:13:52.936808 | orchestrator | Experimental: false
2025-05-03 00:13:52.936822 | orchestrator | containerd:
2025-05-03 00:13:52.936836 | orchestrator | Version: 1.7.27
2025-05-03 00:13:52.936850 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-05-03 00:13:52.936865 | orchestrator | runc:
2025-05-03 00:13:52.936879 | orchestrator | Version: 1.2.5
2025-05-03 00:13:52.936894 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-05-03 00:13:52.936908 | orchestrator | docker-init:
2025-05-03 00:13:52.936922 | orchestrator | Version: 0.19.0
2025-05-03 00:13:52.936937 | orchestrator | GitCommit: de40ad0
2025-05-03 00:13:52.936965 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-05-03 00:13:52.944950 | orchestrator | + set -e
2025-05-03 00:13:52.945002 | orchestrator | + source /opt/manager-vars.sh
2025-05-03 00:13:52.945081 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-05-03 00:13:52.945098 | orchestrator | ++ NUMBER_OF_NODES=6
2025-05-03 00:13:52.945113 | orchestrator | ++ export CEPH_VERSION=reef
2025-05-03 00:13:52.945127 | orchestrator | ++ CEPH_VERSION=reef
2025-05-03 00:13:52.945175 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-05-03 00:13:52.945191 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-05-03 00:13:52.945206 | orchestrator | ++ export MANAGER_VERSION=8.1.0
2025-05-03 00:13:52.945220 | orchestrator | ++ MANAGER_VERSION=8.1.0
2025-05-03 00:13:52.945234 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-05-03 00:13:52.945254 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-05-03 00:13:52.951859 | orchestrator | ++ export ARA=false
2025-05-03 00:13:52.951959 | orchestrator | ++ ARA=false
2025-05-03 00:13:52.951979 | orchestrator | ++ export TEMPEST=false
2025-05-03 00:13:52.951993 | orchestrator | ++ TEMPEST=false
2025-05-03 00:13:52.952008 | orchestrator | ++ export IS_ZUUL=true
2025-05-03 00:13:52.952022 | orchestrator | ++ IS_ZUUL=true
2025-05-03 00:13:52.952039 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.136
2025-05-03 00:13:52.952055 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.136
2025-05-03 00:13:52.952084 | orchestrator | ++ export EXTERNAL_API=false
2025-05-03 00:13:52.952099 | orchestrator | ++ EXTERNAL_API=false
2025-05-03 00:13:52.952113 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-05-03 00:13:52.952133 | orchestrator | ++ IMAGE_USER=ubuntu
2025-05-03 00:13:52.952177 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-05-03 00:13:52.952192 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-05-03 00:13:52.952206 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-05-03 00:13:52.952220 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-05-03 00:13:52.952234 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-05-03 00:13:52.952248 | orchestrator | ++ export INTERACTIVE=false
2025-05-03 00:13:52.952262 | orchestrator | ++ INTERACTIVE=false
2025-05-03 00:13:52.952276 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-05-03 00:13:52.952290 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-05-03 00:13:52.952304 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-05-03 00:13:52.952320 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 8.1.0
2025-05-03 00:13:52.952351 | orchestrator | + set -e
2025-05-03 00:13:52.959243 | orchestrator | + VERSION=8.1.0
2025-05-03 00:13:52.959277 | orchestrator | + sed -i
's/manager_version: .*/manager_version: 8.1.0/g' /opt/configuration/environments/manager/configuration.yml
2025-05-03 00:13:52.959309 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-05-03 00:13:52.964129 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2025-05-03 00:13:52.964195 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2025-05-03 00:13:52.968601 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2025-05-03 00:13:52.977498 | orchestrator | /opt/configuration ~
2025-05-03 00:13:52.979895 | orchestrator | + set -e
2025-05-03 00:13:52.979922 | orchestrator | + pushd /opt/configuration
2025-05-03 00:13:52.979937 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-05-03 00:13:52.979958 | orchestrator | + source /opt/venv/bin/activate
2025-05-03 00:13:52.981093 | orchestrator | ++ deactivate nondestructive
2025-05-03 00:13:52.981259 | orchestrator | ++ '[' -n '' ']'
2025-05-03 00:13:52.981281 | orchestrator | ++ '[' -n '' ']'
2025-05-03 00:13:52.981325 | orchestrator | ++ hash -r
2025-05-03 00:13:52.981340 | orchestrator | ++ '[' -n '' ']'
2025-05-03 00:13:52.981360 | orchestrator | ++ unset VIRTUAL_ENV
2025-05-03 00:13:52.981432 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-05-03 00:13:52.981450 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-05-03 00:13:52.981503 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-05-03 00:13:52.981579 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-05-03 00:13:52.981595 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-05-03 00:13:52.981615 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-05-03 00:13:52.981944 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-03 00:13:52.981976 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-03 00:13:52.981991 | orchestrator | ++ export PATH
2025-05-03 00:13:52.982011 | orchestrator | ++ '[' -n '' ']'
2025-05-03 00:13:53.989660 | orchestrator | ++ '[' -z '' ']'
2025-05-03 00:13:53.989789 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-05-03 00:13:53.989809 | orchestrator | ++ PS1='(venv) '
2025-05-03 00:13:53.989824 | orchestrator | ++ export PS1
2025-05-03 00:13:53.989839 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-05-03 00:13:53.989853 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-05-03 00:13:53.989867 | orchestrator | ++ hash -r
2025-05-03 00:13:53.989883 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2025-05-03 00:13:53.989918 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2025-05-03 00:13:53.990486 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3)
2025-05-03 00:13:53.991872 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2025-05-03 00:13:53.993239 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2)
2025-05-03 00:13:53.994414 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0)
2025-05-03 00:13:54.004349 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.1.8)
2025-05-03 00:13:54.005748 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2025-05-03 00:13:54.006924 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19)
2025-05-03 00:13:54.008256 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2025-05-03 00:13:54.037250 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2)
2025-05-03 00:13:54.038535 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10)
2025-05-03 00:13:54.040060 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.4.0)
2025-05-03 00:13:54.041426 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.4.26)
2025-05-03 00:13:54.045327 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2)
2025-05-03 00:13:54.239982 | orchestrator | ++ which gilt
2025-05-03 00:13:54.244330 | orchestrator | + GILT=/opt/venv/bin/gilt
2025-05-03 00:13:54.489243 | orchestrator | + /opt/venv/bin/gilt overlay
2025-05-03 00:13:54.489370 | orchestrator | osism.cfg-generics:
2025-05-03 00:13:56.007591 | orchestrator | - cloning osism.cfg-generics to /home/dragon/.gilt/clone/github.com/osism.cfg-generics
2025-05-03 00:13:56.007741 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2025-05-03 00:13:56.007908 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2025-05-03 00:13:56.007936 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2025-05-03 00:13:56.008055 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2025-05-03 00:13:56.872103 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2025-05-03 00:13:56.882584 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2025-05-03 00:13:57.312305 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2025-05-03 00:13:57.361372 | orchestrator | ~
2025-05-03 00:13:57.362612 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-05-03 00:13:57.362657 | orchestrator | + deactivate
2025-05-03 00:13:57.362694 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-05-03 00:13:57.362712 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-03 00:13:57.362727 | orchestrator | + export PATH
2025-05-03 00:13:57.362741 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-05-03 00:13:57.362755 | orchestrator | + '[' -n '' ']'
2025-05-03 00:13:57.362769 | orchestrator | + hash -r
2025-05-03 00:13:57.362783 | orchestrator | + '[' -n '' ']'
2025-05-03 00:13:57.362797 | orchestrator | + unset VIRTUAL_ENV
2025-05-03 00:13:57.362811 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-05-03 00:13:57.362825 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-05-03 00:13:57.362841 | orchestrator | + unset -f deactivate
2025-05-03 00:13:57.362855 | orchestrator | + popd
2025-05-03 00:13:57.362879 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]]
2025-05-03 00:13:57.363329 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-05-03 00:13:57.363359 | orchestrator | ++ semver 8.1.0 7.0.0
2025-05-03 00:13:57.413294 | orchestrator | + [[ 1 -ge 0 ]]
2025-05-03 00:13:57.453908 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-05-03 00:13:57.454120 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-05-03 00:13:57.454180 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-05-03 00:13:57.454195 | orchestrator | + source /opt/venv/bin/activate
2025-05-03 00:13:57.454208 | orchestrator | ++ deactivate nondestructive
2025-05-03 00:13:57.454220 | orchestrator | ++ '[' -n '' ']'
2025-05-03 00:13:57.454231 | orchestrator | ++ '[' -n '' ']'
2025-05-03 00:13:57.454243 | orchestrator | ++ hash -r
2025-05-03 00:13:57.454342 | orchestrator | ++ '[' -n '' ']'
2025-05-03 00:13:57.454356 | orchestrator | ++ unset VIRTUAL_ENV
2025-05-03 00:13:57.454368 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-05-03 00:13:57.454388 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-05-03 00:13:57.454405 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-05-03 00:13:57.454503 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-05-03 00:13:57.454517 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-05-03 00:13:57.454528 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-05-03 00:13:57.454540 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-03 00:13:57.454552 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-03 00:13:57.454564 | orchestrator | ++ export PATH
2025-05-03 00:13:57.454579 | orchestrator | ++ '[' -n '' ']'
2025-05-03 00:13:58.601385 | orchestrator | ++ '[' -z '' ']'
2025-05-03 00:13:58.601551 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-05-03 00:13:58.601573 | orchestrator | ++ PS1='(venv) '
2025-05-03 00:13:58.601589 | orchestrator | ++ export PS1
2025-05-03 00:13:58.601604 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-05-03 00:13:58.601618 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-05-03 00:13:58.601635 | orchestrator | ++ hash -r
2025-05-03 00:13:58.601650 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-05-03 00:13:58.601682 | orchestrator |
2025-05-03 00:13:59.150948 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-05-03 00:13:59.151091 | orchestrator |
2025-05-03 00:13:59.151170 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-05-03 00:13:59.151223 | orchestrator | ok: [testbed-manager]
2025-05-03 00:14:00.092538 | orchestrator |
2025-05-03 00:14:00.092658 | orchestrator | TASK [Copy fact files] *********************************************************
2025-05-03 00:14:00.092696 | orchestrator | changed: [testbed-manager]
2025-05-03 00:14:02.360855 | orchestrator |
2025-05-03 00:14:02.360991 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-05-03 00:14:02.361012 | orchestrator |
2025-05-03 00:14:02.361027 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-03 00:14:02.361059 | orchestrator | ok: [testbed-manager]
2025-05-03 00:14:06.928205 | orchestrator |
2025-05-03 00:14:06.928326 | orchestrator | TASK [Pull images] *************************************************************
2025-05-03 00:14:06.928395 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2)
2025-05-03 00:15:22.737133 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/mariadb:11.6.2)
2025-05-03 00:15:22.737323 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:8.1.0)
2025-05-03 00:15:22.737343 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:8.1.0)
2025-05-03 00:15:22.737359 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:8.1.0)
2025-05-03 00:15:22.737375 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/redis:7.4.1-alpine)
2025-05-03 00:15:22.737389 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.7)
2025-05-03 00:15:22.737404 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:8.1.0)
2025-05-03 00:15:22.737418 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:0.20241219.2)
2025-05-03 00:15:22.737441 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/postgres:16.6-alpine)
2025-05-03 00:15:22.737457 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/traefik:v3.2.1)
2025-05-03 00:15:22.737472 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/hashicorp/vault:1.18.2)
2025-05-03 00:15:22.737486 | orchestrator |
2025-05-03 00:15:22.737501 | orchestrator | TASK [Check status] ************************************************************
2025-05-03 00:15:22.737532 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-05-03 00:15:22.786955 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left).
2025-05-03 00:15:22.787056 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left).
2025-05-03 00:15:22.787071 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left).
2025-05-03 00:15:22.787086 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (116 retries left).
2025-05-03 00:15:22.787105 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j870095376710.1585', 'results_file': '/home/dragon/.ansible_async/j870095376710.1585', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'})
2025-05-03 00:15:22.787140 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j239064767965.1610', 'results_file': '/home/dragon/.ansible_async/j239064767965.1610', 'changed': True, 'item': 'index.docker.io/library/mariadb:11.6.2', 'ansible_loop_var': 'item'})
2025-05-03 00:15:22.787200 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-05-03 00:15:22.787226 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j51143535825.1635', 'results_file': '/home/dragon/.ansible_async/j51143535825.1635', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:8.1.0', 'ansible_loop_var': 'item'})
2025-05-03 00:15:22.787261 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j789089917349.1667', 'results_file': '/home/dragon/.ansible_async/j789089917349.1667', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:8.1.0', 'ansible_loop_var': 'item'})
2025-05-03 00:15:22.787291 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j833343948918.1699', 'results_file': '/home/dragon/.ansible_async/j833343948918.1699', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:8.1.0', 'ansible_loop_var': 'item'})
2025-05-03 00:15:22.787318 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j52813741629.1731', 'results_file': '/home/dragon/.ansible_async/j52813741629.1731', 'changed': True, 'item': 'index.docker.io/library/redis:7.4.1-alpine', 'ansible_loop_var': 'item'})
2025-05-03 00:15:22.787342 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-05-03 00:15:22.787367 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j378319406203.1763', 'results_file': '/home/dragon/.ansible_async/j378319406203.1763', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.7', 'ansible_loop_var': 'item'})
2025-05-03 00:15:22.787432 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j985024137976.1795', 'results_file': '/home/dragon/.ansible_async/j985024137976.1795', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:8.1.0', 'ansible_loop_var': 'item'})
2025-05-03 00:15:22.787450 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j411374063536.1827', 'results_file': '/home/dragon/.ansible_async/j411374063536.1827', 'changed': True, 'item': 'registry.osism.tech/osism/osism:0.20241219.2', 'ansible_loop_var': 'item'})
2025-05-03 00:15:22.787464 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j969746807043.1859', 'results_file': '/home/dragon/.ansible_async/j969746807043.1859', 'changed': True, 'item': 'index.docker.io/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'})
2025-05-03 00:15:22.787478 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j389540548109.1891', 'results_file': '/home/dragon/.ansible_async/j389540548109.1891', 'changed': True, 'item': 'index.docker.io/library/traefik:v3.2.1', 'ansible_loop_var': 'item'})
2025-05-03 00:15:22.787492 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j346085475116.1932', 'results_file': '/home/dragon/.ansible_async/j346085475116.1932', 'changed': True, 'item': 'index.docker.io/hashicorp/vault:1.18.2', 'ansible_loop_var': 'item'})
2025-05-03 00:15:22.787507 | orchestrator |
2025-05-03 00:15:22.787525 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-05-03 00:15:22.787557 | orchestrator | ok: [testbed-manager]
2025-05-03 00:15:23.256062 | orchestrator |
2025-05-03 00:15:23.256216 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-05-03 00:15:23.256253 | orchestrator | changed: [testbed-manager]
2025-05-03 00:15:23.589812 | orchestrator |
2025-05-03 00:15:23.589927 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] *******************************
2025-05-03 00:15:23.589960 | orchestrator | changed: [testbed-manager]
2025-05-03 00:15:23.926132 | orchestrator |
2025-05-03 00:15:23.926312 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-05-03 00:15:23.926351 | orchestrator | changed: [testbed-manager]
2025-05-03 00:15:23.975635 | orchestrator |
2025-05-03 00:15:23.975732 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-05-03 00:15:23.975763 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:15:24.308213 | orchestrator |
2025-05-03 00:15:24.308333 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-05-03 00:15:24.308366 | orchestrator | ok: [testbed-manager]
2025-05-03 00:15:24.424629 | orchestrator |
2025-05-03 00:15:24.424745 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-05-03 00:15:24.424780 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:15:26.206121 | orchestrator |
2025-05-03 00:15:26.206283 | orchestrator | PLAY [Apply role traefik & netbox] *********************************************
2025-05-03 00:15:26.206306 | orchestrator |
2025-05-03 00:15:26.206322 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-03 00:15:26.206353 | orchestrator | ok: [testbed-manager]
2025-05-03 00:15:26.310821 | orchestrator |
2025-05-03 00:15:26.310935 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-05-03 00:15:26.310970 | orchestrator | included: osism.services.traefik for testbed-manager
2025-05-03 00:15:26.368946 | orchestrator |
2025-05-03 00:15:26.369050 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-05-03 00:15:26.369084 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-05-03 00:15:27.497954 | orchestrator |
2025-05-03 00:15:27.498212 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-05-03 00:15:27.498257 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-05-03 00:15:29.341204 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-05-03 00:15:29.341358 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-05-03 00:15:29.341381 | orchestrator |
2025-05-03 00:15:29.341409 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-05-03 00:15:29.341443 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-05-03 00:15:29.997609 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-05-03 00:15:29.997724 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-05-03 00:15:29.997744 | orchestrator |
2025-05-03 00:15:29.997760 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-05-03 00:15:29.997793 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-03 00:15:30.634379 | orchestrator | changed: [testbed-manager]
2025-05-03 00:15:30.634508 | orchestrator |
2025-05-03 00:15:30.634544 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-05-03 00:15:30.634591 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-03 00:15:30.693801 | orchestrator | changed: [testbed-manager]
2025-05-03 00:15:30.693997 | orchestrator |
2025-05-03 00:15:30.694071 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-05-03 00:15:30.694108 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:15:31.062729 | orchestrator |
2025-05-03 00:15:31.062844 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-05-03 00:15:31.062880 | orchestrator | ok: [testbed-manager]
2025-05-03 00:15:31.132383 | orchestrator |
2025-05-03 00:15:31.132492 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-05-03 00:15:31.132526 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-05-03 00:15:32.196782 | orchestrator |
2025-05-03 00:15:32.196915 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-05-03 00:15:32.196954 | orchestrator | changed: [testbed-manager]
2025-05-03 00:15:32.991809 | orchestrator |
2025-05-03 00:15:32.991938 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-05-03 00:15:32.991978 | orchestrator | changed: [testbed-manager]
2025-05-03 00:15:36.405803 | orchestrator |
2025-05-03 00:15:36.405940 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-05-03 00:15:36.405981 | orchestrator | changed: [testbed-manager]
2025-05-03 00:15:36.528592 | orchestrator |
2025-05-03 00:15:36.528704 | orchestrator | TASK [Apply netbox role] *******************************************************
2025-05-03 00:15:36.528740 | orchestrator | included: osism.services.netbox for testbed-manager
2025-05-03 00:15:36.580045 | orchestrator |
2025-05-03 00:15:36.580220 | orchestrator | TASK [osism.services.netbox : Include install tasks] ***************************
2025-05-03 00:15:36.580267 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager
2025-05-03 00:15:39.055417 | orchestrator |
2025-05-03 00:15:39.055515 | orchestrator | TASK [osism.services.netbox : Install required packages] ***********************
2025-05-03 00:15:39.055541 | orchestrator | ok: [testbed-manager]
2025-05-03 00:15:39.164131 | orchestrator |
2025-05-03 00:15:39.164286 | orchestrator | TASK [osism.services.netbox : Include config tasks] ****************************
2025-05-03 00:15:39.164320 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager
2025-05-03 00:15:40.304547 | orchestrator |
2025-05-03 00:15:40.304654 | orchestrator | TASK [osism.services.netbox : Create required directories] *********************
2025-05-03 00:15:40.304684 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox)
2025-05-03 00:15:40.371835 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration)
2025-05-03 00:15:40.371949 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets)
2025-05-03 00:15:40.371967 | orchestrator |
2025-05-03 00:15:40.371982 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] *******************
2025-05-03 00:15:40.372042 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager
2025-05-03 00:15:40.994956 | orchestrator |
2025-05-03 00:15:40.995081 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] *****************
2025-05-03 00:15:40.995119 | orchestrator | changed: [testbed-manager] => (item=postgres)
2025-05-03 00:15:41.625755 | orchestrator |
2025-05-03 00:15:41.625879 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] ****************
2025-05-03 00:15:41.625926 | orchestrator | changed: [testbed-manager]
2025-05-03 00:15:42.255670 | orchestrator |
2025-05-03 00:15:42.255789 | orchestrator | TASK [osism.services.netbox : Copy secret files] *******************************
2025-05-03 00:15:42.255828 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-03 00:15:42.654197 | orchestrator | changed: [testbed-manager]
2025-05-03 00:15:42.654340 | orchestrator |
2025-05-03 00:15:42.654361 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] *****
2025-05-03 00:15:42.654394 | orchestrator | changed: [testbed-manager]
2025-05-03 00:15:42.988262 | orchestrator |
2025-05-03 00:15:42.988388 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] *******************
2025-05-03 00:15:42.988427 | orchestrator | ok: [testbed-manager]
2025-05-03 00:15:43.034218 | orchestrator |
2025-05-03 00:15:43.034335 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ******************************
2025-05-03 00:15:43.034371 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:15:43.672291 | orchestrator |
2025-05-03 00:15:43.672445 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] ***********
2025-05-03 00:15:43.672496 | orchestrator | changed: [testbed-manager]
2025-05-03 00:15:43.747265 | orchestrator |
2025-05-03 00:15:43.747423 | orchestrator | TASK [osism.services.netbox : Include config tasks] ****************************
2025-05-03 00:15:43.747481 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager
2025-05-03 00:15:44.508262 | orchestrator |
2025-05-03 00:15:44.508393 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] ***********
2025-05-03 00:15:44.508430 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers)
2025-05-03 00:15:45.182531 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts)
2025-05-03 00:15:45.182654 | orchestrator |
2025-05-03 00:15:45.182677 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] *******************
2025-05-03 00:15:45.182709 | orchestrator | changed: [testbed-manager] => (item=netbox)
2025-05-03 00:15:45.825606 | orchestrator |
2025-05-03 00:15:45.825732 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ******************
2025-05-03 00:15:45.825769 | orchestrator | changed: [testbed-manager]
2025-05-03 00:15:45.878272 | orchestrator |
2025-05-03 00:15:45.878390 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] ****
2025-05-03 00:15:45.878425 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:15:46.501701 | orchestrator |
2025-05-03 00:15:46.501825 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] *****
2025-05-03 00:15:46.501861 | orchestrator | changed: [testbed-manager]
2025-05-03 00:15:48.306564 | orchestrator |
2025-05-03 00:15:48.306705 | orchestrator | TASK [osism.services.netbox : Copy secret files] *******************************
2025-05-03 00:15:48.306744 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-03 00:15:54.127475 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-03 00:15:54.127614 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-03 00:15:54.127634 | orchestrator | changed: [testbed-manager]
2025-05-03 00:15:54.127651 | orchestrator |
2025-05-03 00:15:54.127667 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ******************
2025-05-03 00:15:54.127699 | orchestrator | changed: [testbed-manager] => (item=custom_fields)
2025-05-03 00:15:54.756138 | orchestrator | changed: [testbed-manager] => (item=device_roles)
2025-05-03 00:15:54.756294 | orchestrator | changed: [testbed-manager] => (item=device_types)
2025-05-03 00:15:54.756309 | orchestrator | changed: [testbed-manager] => (item=groups)
2025-05-03 00:15:54.756319 | orchestrator | changed: [testbed-manager] => (item=manufacturers)
2025-05-03 00:15:54.756330 | orchestrator | changed: [testbed-manager] => (item=object_permissions)
2025-05-03 00:15:54.756368 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles)
2025-05-03 00:15:54.756378 | orchestrator | changed: [testbed-manager] => (item=sites)
2025-05-03 00:15:54.756389 | orchestrator | changed: [testbed-manager] => (item=tags)
2025-05-03 00:15:54.756399 | orchestrator | changed: [testbed-manager] => (item=users)
2025-05-03 00:15:54.756409 | orchestrator |
2025-05-03 00:15:54.756419 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] ***************
2025-05-03 00:15:54.756445 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py)
2025-05-03 00:15:54.839424 | orchestrator |
2025-05-03 00:15:54.839531 | orchestrator | TASK [osism.services.netbox : Include service tasks] ***************************
2025-05-03 00:15:54.839564 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager
2025-05-03 00:15:55.541752 | orchestrator |
2025-05-03 00:15:55.541866 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] *******************
2025-05-03 00:15:55.541899 | orchestrator | changed: [testbed-manager]
2025-05-03 00:15:56.170173 | orchestrator |
2025-05-03 00:15:56.170294 | orchestrator | TASK [osism.services.netbox : Create traefik external network] *****************
2025-05-03 00:15:56.170329 | orchestrator | ok: [testbed-manager]
2025-05-03 00:15:56.893090 | orchestrator |
2025-05-03 00:15:56.893277 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ********************
2025-05-03 00:15:56.893315 | orchestrator | changed: [testbed-manager]
2025-05-03 00:16:02.670320 | orchestrator |
2025-05-03 00:16:02.670453 | orchestrator | TASK [osism.services.netbox : Pull container images] ***************************
2025-05-03 00:16:02.670491 | orchestrator | changed: [testbed-manager]
2025-05-03 00:16:03.565422 | orchestrator |
2025-05-03 00:16:03.565549 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] ***
2025-05-03 00:16:03.565585 | orchestrator | ok: [testbed-manager]
2025-05-03 00:16:25.688286 | orchestrator |
2025-05-03 00:16:25.688450 | orchestrator | TASK [osism.services.netbox : Manage netbox service] ***************************
2025-05-03 00:16:25.688502 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left).
2025-05-03 00:16:25.746816 | orchestrator | ok: [testbed-manager]
2025-05-03 00:16:25.746935 | orchestrator |
2025-05-03 00:16:25.746954 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ********
2025-05-03 00:16:25.746988 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:16:25.780598 | orchestrator |
2025-05-03 00:16:25.780707 | orchestrator | TASK [osism.services.netbox : Flush handlers] **********************************
2025-05-03 00:16:25.780727 | orchestrator |
2025-05-03 00:16:25.780744 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-05-03 00:16:25.780774 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:16:25.847603 | orchestrator |
2025-05-03 00:16:25.847711 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] ***************
2025-05-03 00:16:25.847745 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager
2025-05-03 00:16:26.640361 | orchestrator |
2025-05-03 00:16:26.640481 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ******
2025-05-03 00:16:26.640518 | orchestrator | ok: [testbed-manager]
2025-05-03 00:16:26.714109 | orchestrator |
2025-05-03 00:16:26.714280 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] ***
2025-05-03 00:16:26.714316 | orchestrator | ok: [testbed-manager]
2025-05-03 00:16:26.774581 | orchestrator |
2025-05-03 00:16:26.774716 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] ***
2025-05-03 00:16:26.774752 | orchestrator | ok: [testbed-manager] => {
2025-05-03 00:16:27.412633 | orchestrator |     "msg": "The major version of the running postgres container is 16"
2025-05-03 00:16:27.412753 | orchestrator | }
2025-05-03 00:16:27.412774 | orchestrator |
2025-05-03 00:16:27.412790 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ******************
2025-05-03 00:16:27.412822 | orchestrator | ok: [testbed-manager]
2025-05-03 00:16:28.257926 | orchestrator |
2025-05-03 00:16:28.258106 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] **********
2025-05-03 00:16:28.258237 | orchestrator | ok: [testbed-manager]
2025-05-03 00:16:28.328599 | orchestrator |
2025-05-03 00:16:28.328705 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ******
2025-05-03 00:16:28.328738 | orchestrator | ok: [testbed-manager]
2025-05-03 00:16:28.374644 | orchestrator |
2025-05-03 00:16:28.374739 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] ***
2025-05-03 00:16:28.374782 | orchestrator | ok: [testbed-manager] => {
2025-05-03 00:16:28.436909 | orchestrator |     "msg": "The major version of the postgres image is 16"
2025-05-03 00:16:28.437024 | orchestrator | }
2025-05-03 00:16:28.437044 | orchestrator |
2025-05-03 00:16:28.437061 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ******************
2025-05-03 00:16:28.437092 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:16:28.495461 | orchestrator |
2025-05-03 00:16:28.495553 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ******
2025-05-03 00:16:28.495585 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:16:28.554503 | orchestrator |
2025-05-03 00:16:28.554592 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] *********
2025-05-03 00:16:28.554622 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:16:28.606562 | orchestrator |
2025-05-03 00:16:28.606640 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************
2025-05-03 00:16:28.606672 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:16:28.657377 | orchestrator |
2025-05-03 00:16:28.657455 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] ***
2025-05-03 00:16:28.657503 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:16:28.722555 | orchestrator |
2025-05-03 00:16:28.722630 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] *****************
2025-05-03 00:16:28.722661 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:16:29.895519 | orchestrator |
2025-05-03 00:16:29.895669 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] ***************
2025-05-03 00:16:29.895713 | orchestrator | changed: [testbed-manager]
2025-05-03 00:16:29.967969 | orchestrator |
2025-05-03 00:16:29.968083 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] ***
2025-05-03 00:16:29.968119 | orchestrator | ok: [testbed-manager]
2025-05-03 00:17:30.032037 | orchestrator |
2025-05-03 00:17:30.032158 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] *****
2025-05-03 00:17:30.032226 | orchestrator | Pausing for 60 seconds
2025-05-03 00:17:30.079269 | orchestrator | changed: [testbed-manager]
2025-05-03 00:17:30.079373 | orchestrator |
2025-05-03 00:17:30.079392 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] ***
2025-05-03 00:17:30.079423 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager
2025-05-03 00:21:09.844689 | orchestrator |
2025-05-03 00:21:09.844833 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] ***
2025-05-03 00:21:09.844873 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left).
2025-05-03 00:21:11.719227 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left).
2025-05-03 00:21:11.719362 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left).
2025-05-03 00:21:11.719401 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left).
2025-05-03 00:21:11.719429 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left).
2025-05-03 00:21:11.719455 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left).
2025-05-03 00:21:11.719481 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left).
2025-05-03 00:21:11.719498 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left).
2025-05-03 00:21:11.719512 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left).
2025-05-03 00:21:11.719527 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left).
2025-05-03 00:21:11.719569 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left).
2025-05-03 00:21:11.719584 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left).
2025-05-03 00:21:11.719599 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left).
2025-05-03 00:21:11.719613 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left).
2025-05-03 00:21:11.719627 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-05-03 00:21:11.719646 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-05-03 00:21:11.719670 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-05-03 00:21:11.719694 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left). 2025-05-03 00:21:11.719733 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left). 2025-05-03 00:21:11.719772 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left). 2025-05-03 00:21:11.719800 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left). 
2025-05-03 00:21:11.719825 | orchestrator | changed: [testbed-manager] 2025-05-03 00:21:11.719846 | orchestrator | 2025-05-03 00:21:11.719864 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-05-03 00:21:11.719880 | orchestrator | 2025-05-03 00:21:11.719897 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-03 00:21:11.719927 | orchestrator | ok: [testbed-manager] 2025-05-03 00:21:11.823655 | orchestrator | 2025-05-03 00:21:11.823752 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-05-03 00:21:11.823784 | orchestrator | included: osism.services.manager for testbed-manager 2025-05-03 00:21:11.878872 | orchestrator | 2025-05-03 00:21:11.878962 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-05-03 00:21:11.878993 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-05-03 00:21:13.379694 | orchestrator | 2025-05-03 00:21:13.379802 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-05-03 00:21:13.379836 | orchestrator | ok: [testbed-manager] 2025-05-03 00:21:13.428955 | orchestrator | 2025-05-03 00:21:13.429043 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-05-03 00:21:13.429072 | orchestrator | ok: [testbed-manager] 2025-05-03 00:21:13.522283 | orchestrator | 2025-05-03 00:21:13.522382 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-05-03 00:21:13.522415 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-05-03 00:21:16.109530 | orchestrator | 2025-05-03 00:21:16.109664 | orchestrator | TASK 
[osism.services.manager : Create required directories] ******************** 2025-05-03 00:21:16.109702 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-05-03 00:21:16.751908 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-05-03 00:21:16.752029 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-05-03 00:21:16.752049 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-05-03 00:21:16.752065 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-05-03 00:21:16.752081 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-05-03 00:21:16.752096 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-05-03 00:21:16.752110 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-05-03 00:21:16.752125 | orchestrator | 2025-05-03 00:21:16.752140 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-05-03 00:21:16.752170 | orchestrator | changed: [testbed-manager] 2025-05-03 00:21:16.836011 | orchestrator | 2025-05-03 00:21:16.836137 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-05-03 00:21:16.836173 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-05-03 00:21:18.103562 | orchestrator | 2025-05-03 00:21:18.103720 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-05-03 00:21:18.103773 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-05-03 00:21:18.725591 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-05-03 00:21:18.725710 | orchestrator | 2025-05-03 00:21:18.725729 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-05-03 00:21:18.725760 | orchestrator | 
changed: [testbed-manager] 2025-05-03 00:21:18.799320 | orchestrator | 2025-05-03 00:21:18.799405 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-05-03 00:21:18.799438 | orchestrator | skipping: [testbed-manager] 2025-05-03 00:21:18.864818 | orchestrator | 2025-05-03 00:21:18.864932 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-05-03 00:21:18.864965 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-05-03 00:21:20.223298 | orchestrator | 2025-05-03 00:21:20.223422 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-05-03 00:21:20.223461 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-03 00:21:20.835369 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-03 00:21:20.835487 | orchestrator | changed: [testbed-manager] 2025-05-03 00:21:20.835508 | orchestrator | 2025-05-03 00:21:20.835524 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-05-03 00:21:20.835555 | orchestrator | changed: [testbed-manager] 2025-05-03 00:21:20.922006 | orchestrator | 2025-05-03 00:21:20.922176 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-05-03 00:21:20.922259 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-05-03 00:21:21.549381 | orchestrator | 2025-05-03 00:21:21.549506 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-05-03 00:21:21.549541 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-03 00:21:22.150340 | orchestrator | changed: [testbed-manager] 2025-05-03 00:21:22.150443 | orchestrator | 2025-05-03 
00:21:22.150458 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] ******************* 2025-05-03 00:21:22.150481 | orchestrator | changed: [testbed-manager] 2025-05-03 00:21:22.236114 | orchestrator | 2025-05-03 00:21:22.236259 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-05-03 00:21:22.236294 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-05-03 00:21:22.830246 | orchestrator | 2025-05-03 00:21:22.830344 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-05-03 00:21:22.830381 | orchestrator | changed: [testbed-manager] 2025-05-03 00:21:23.236821 | orchestrator | 2025-05-03 00:21:23.236945 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-05-03 00:21:23.236981 | orchestrator | changed: [testbed-manager] 2025-05-03 00:21:24.451407 | orchestrator | 2025-05-03 00:21:24.451556 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-05-03 00:21:24.451596 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-05-03 00:21:25.210007 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-05-03 00:21:25.210223 | orchestrator | 2025-05-03 00:21:25.210248 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-05-03 00:21:25.210279 | orchestrator | changed: [testbed-manager] 2025-05-03 00:21:25.605257 | orchestrator | 2025-05-03 00:21:25.605385 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-05-03 00:21:25.605422 | orchestrator | ok: [testbed-manager] 2025-05-03 00:21:25.964903 | orchestrator | 2025-05-03 00:21:25.965026 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] 
************** 2025-05-03 00:21:25.965098 | orchestrator | changed: [testbed-manager] 2025-05-03 00:21:26.013152 | orchestrator | 2025-05-03 00:21:26.013282 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-05-03 00:21:26.013304 | orchestrator | skipping: [testbed-manager] 2025-05-03 00:21:26.104031 | orchestrator | 2025-05-03 00:21:26.104154 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-05-03 00:21:26.104232 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-05-03 00:21:26.147689 | orchestrator | 2025-05-03 00:21:26.147788 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-05-03 00:21:26.147822 | orchestrator | ok: [testbed-manager] 2025-05-03 00:21:28.167466 | orchestrator | 2025-05-03 00:21:28.167625 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-05-03 00:21:28.167675 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-05-03 00:21:28.854552 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-05-03 00:21:28.854675 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-05-03 00:21:28.854694 | orchestrator | 2025-05-03 00:21:28.854709 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-05-03 00:21:28.854740 | orchestrator | changed: [testbed-manager] 2025-05-03 00:21:29.564166 | orchestrator | 2025-05-03 00:21:29.564355 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-05-03 00:21:29.564393 | orchestrator | changed: [testbed-manager] 2025-05-03 00:21:30.266650 | orchestrator | 2025-05-03 00:21:30.266777 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] 
*********************** 2025-05-03 00:21:30.266817 | orchestrator | changed: [testbed-manager] 2025-05-03 00:21:30.353551 | orchestrator | 2025-05-03 00:21:30.353668 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-05-03 00:21:30.353703 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-05-03 00:21:30.411234 | orchestrator | 2025-05-03 00:21:30.411345 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-05-03 00:21:30.411378 | orchestrator | ok: [testbed-manager] 2025-05-03 00:21:31.098648 | orchestrator | 2025-05-03 00:21:31.098770 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-05-03 00:21:31.098804 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-05-03 00:21:31.194379 | orchestrator | 2025-05-03 00:21:31.194485 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-05-03 00:21:31.194519 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-05-03 00:21:31.860805 | orchestrator | 2025-05-03 00:21:31.860928 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-05-03 00:21:31.860963 | orchestrator | changed: [testbed-manager] 2025-05-03 00:21:32.476029 | orchestrator | 2025-05-03 00:21:32.476268 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-05-03 00:21:32.476306 | orchestrator | ok: [testbed-manager] 2025-05-03 00:21:32.533984 | orchestrator | 2025-05-03 00:21:32.534136 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-05-03 00:21:32.534171 | orchestrator | skipping: [testbed-manager] 
2025-05-03 00:21:32.585177 | orchestrator | 2025-05-03 00:21:32.585309 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-05-03 00:21:32.585341 | orchestrator | ok: [testbed-manager] 2025-05-03 00:21:33.398911 | orchestrator | 2025-05-03 00:21:33.399047 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-05-03 00:21:33.399100 | orchestrator | changed: [testbed-manager] 2025-05-03 00:22:14.258938 | orchestrator | 2025-05-03 00:22:14.259079 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-05-03 00:22:14.259117 | orchestrator | changed: [testbed-manager] 2025-05-03 00:22:14.909308 | orchestrator | 2025-05-03 00:22:14.909433 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-05-03 00:22:14.909471 | orchestrator | ok: [testbed-manager] 2025-05-03 00:22:17.514416 | orchestrator | 2025-05-03 00:22:17.514502 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-05-03 00:22:17.514524 | orchestrator | changed: [testbed-manager] 2025-05-03 00:22:17.573805 | orchestrator | 2025-05-03 00:22:17.573918 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-05-03 00:22:17.573952 | orchestrator | ok: [testbed-manager] 2025-05-03 00:22:17.623145 | orchestrator | 2025-05-03 00:22:17.623278 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-03 00:22:17.623296 | orchestrator | 2025-05-03 00:22:17.623311 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-05-03 00:22:17.623340 | orchestrator | skipping: [testbed-manager] 2025-05-03 00:23:17.681512 | orchestrator | 2025-05-03 00:23:17.681655 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service 
to start] *** 2025-05-03 00:23:17.681694 | orchestrator | Pausing for 60 seconds 2025-05-03 00:23:22.713967 | orchestrator | changed: [testbed-manager] 2025-05-03 00:23:22.714260 | orchestrator | 2025-05-03 00:23:22.714288 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-05-03 00:23:22.714323 | orchestrator | changed: [testbed-manager] 2025-05-03 00:24:04.305083 | orchestrator | 2025-05-03 00:24:04.305284 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-05-03 00:24:04.305324 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-05-03 00:24:09.796196 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-05-03 00:24:09.796398 | orchestrator | changed: [testbed-manager] 2025-05-03 00:24:09.796421 | orchestrator | 2025-05-03 00:24:09.796451 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-05-03 00:24:09.796483 | orchestrator | changed: [testbed-manager] 2025-05-03 00:24:09.901475 | orchestrator | 2025-05-03 00:24:09.901592 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-05-03 00:24:09.901626 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-05-03 00:24:09.955416 | orchestrator | 2025-05-03 00:24:09.955522 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-03 00:24:09.955540 | orchestrator | 2025-05-03 00:24:09.955555 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-05-03 00:24:09.955585 | orchestrator | skipping: [testbed-manager] 2025-05-03 00:24:10.069863 | orchestrator | 2025-05-03 00:24:10.069968 | orchestrator | 
PLAY RECAP ********************************************************************* 2025-05-03 00:24:10.069987 | orchestrator | testbed-manager : ok=109 changed=58 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025-05-03 00:24:10.070003 | orchestrator | 2025-05-03 00:24:10.070148 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-03 00:24:10.079520 | orchestrator | + deactivate 2025-05-03 00:24:10.079548 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-05-03 00:24:10.079564 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-03 00:24:10.079578 | orchestrator | + export PATH 2025-05-03 00:24:10.079593 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-05-03 00:24:10.079607 | orchestrator | + '[' -n '' ']' 2025-05-03 00:24:10.079622 | orchestrator | + hash -r 2025-05-03 00:24:10.079636 | orchestrator | + '[' -n '' ']' 2025-05-03 00:24:10.079650 | orchestrator | + unset VIRTUAL_ENV 2025-05-03 00:24:10.079663 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-05-03 00:24:10.079678 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-05-03 00:24:10.079692 | orchestrator | + unset -f deactivate 2025-05-03 00:24:10.079707 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-05-03 00:24:10.079727 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-03 00:24:10.080606 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-05-03 00:24:10.080630 | orchestrator | + local max_attempts=60 2025-05-03 00:24:10.080645 | orchestrator | + local name=ceph-ansible 2025-05-03 00:24:10.080659 | orchestrator | + local attempt_num=1 2025-05-03 00:24:10.080678 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-05-03 00:24:10.115654 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-03 00:24:10.116331 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-05-03 00:24:10.116360 | orchestrator | + local max_attempts=60 2025-05-03 00:24:10.116377 | orchestrator | + local name=kolla-ansible 2025-05-03 00:24:10.116393 | orchestrator | + local attempt_num=1 2025-05-03 00:24:10.116413 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-05-03 00:24:10.145628 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-03 00:24:10.146162 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-05-03 00:24:10.146198 | orchestrator | + local max_attempts=60 2025-05-03 00:24:10.146214 | orchestrator | + local name=osism-ansible 2025-05-03 00:24:10.146229 | orchestrator | + local attempt_num=1 2025-05-03 00:24:10.146249 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-05-03 00:24:10.171227 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-03 00:24:10.812801 | orchestrator | + [[ true == \t\r\u\e ]] 2025-05-03 00:24:10.812915 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-05-03 00:24:10.812951 | orchestrator | ++ semver 8.1.0 9.0.0 
2025-05-03 00:24:10.858292 | orchestrator | + [[ -1 -ge 0 ]] 2025-05-03 00:24:11.082200 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-05-03 00:24:11.082308 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-05-03 00:24:11.082344 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-05-03 00:24:11.089844 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:8.1.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-05-03 00:24:11.089871 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:8.1.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-05-03 00:24:11.089885 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-05-03 00:24:11.089921 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-05-03 00:24:11.089937 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" beat About a minute ago Up About a minute (healthy) 2025-05-03 00:24:11.089955 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" conductor About a minute ago Up About a minute (healthy) 2025-05-03 00:24:11.089969 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" flower About a minute ago Up About a minute (healthy) 2025-05-03 00:24:11.089983 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:8.1.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 48 seconds (healthy) 2025-05-03 00:24:11.089998 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" 
listener About a minute ago Up About a minute (healthy) 2025-05-03 00:24:11.090012 | orchestrator | manager-mariadb-1 index.docker.io/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-05-03 00:24:11.090066 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" netbox About a minute ago Up About a minute (healthy) 2025-05-03 00:24:11.090111 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" openstack About a minute ago Up About a minute (healthy) 2025-05-03 00:24:11.090152 | orchestrator | manager-redis-1 index.docker.io/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-05-03 00:24:11.090167 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" watchdog About a minute ago Up About a minute (healthy) 2025-05-03 00:24:11.090181 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:8.1.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-05-03 00:24:11.090195 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:8.1.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-05-03 00:24:11.090209 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- sl…" osismclient About a minute ago Up About a minute (healthy) 2025-05-03 00:24:11.090230 | orchestrator | + docker compose --project-directory /opt/netbox ps 2025-05-03 00:24:11.230654 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-05-03 00:24:11.238356 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.1.7 "/usr/bin/tini -- /o…" netbox 8 minutes ago Up 7 minutes (healthy) 2025-05-03 00:24:11.238415 | orchestrator | netbox-netbox-worker-1 
registry.osism.tech/osism/netbox:v4.1.7 "/opt/netbox/venv/bi…" netbox-worker 8 minutes ago Up 3 minutes (healthy) 2025-05-03 00:24:11.238432 | orchestrator | netbox-postgres-1 index.docker.io/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 8 minutes ago Up 7 minutes (healthy) 5432/tcp 2025-05-03 00:24:11.238448 | orchestrator | netbox-redis-1 index.docker.io/library/redis:7.4.3-alpine "docker-entrypoint.s…" redis 8 minutes ago Up 7 minutes (healthy) 6379/tcp 2025-05-03 00:24:11.238473 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-03 00:24:11.293142 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-03 00:24:11.298414 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-05-03 00:24:11.298552 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-05-03 00:24:12.837586 | orchestrator | 2025-05-03 00:24:12 | INFO  | Task 03c55d6e-20fa-4b10-9b39-4d9551b0d84e (resolvconf) was prepared for execution. 2025-05-03 00:24:15.821786 | orchestrator | 2025-05-03 00:24:12 | INFO  | It takes a moment until task 03c55d6e-20fa-4b10-9b39-4d9551b0d84e (resolvconf) has been started and output is visible here. 
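The shell trace above exercises two small helpers: `wait_for_container_healthy`, which polls `docker inspect` until a container reports `healthy`, and `semver`, which prints `-1`/`0`/`1` for a version comparison (`semver 8.1.0 9.0.0` yields `-1`, `semver 8.1.0 7.0.0` yields `1`). The real implementations live in the testbed configuration scripts; the sketch below is a hedged reconstruction from the trace, and the retry delay, error message, and `sort -V` approach are assumptions.

```shell
# Compare two versions; print -1, 0, or 1 (illustrative reimplementation,
# relying on GNU `sort -V` for version ordering).
semver() {
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" == "$1" ]]; then
        echo -1
    else
        echo 1
    fi
}

# Poll a container's Docker health status until it reports "healthy",
# giving up after max_attempts polls (delay between polls is assumed).
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container ${name} did not become healthy" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5
    done
}

semver 8.1.0 9.0.0   # prints -1, matching the `[[ -1 -ge 0 ]]` check in the trace
```

This mirrors the deploy script's gating logic: version checks decide which configuration steps run, and the health polls block until `ceph-ansible`, `kolla-ansible`, and `osism-ansible` are usable before `osism apply` is invoked.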
2025-05-03 00:24:15.821934 | orchestrator | 2025-05-03 00:24:15.822952 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-05-03 00:24:15.824647 | orchestrator | 2025-05-03 00:24:15.825288 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-03 00:24:15.825838 | orchestrator | Saturday 03 May 2025 00:24:15 +0000 (0:00:00.086) 0:00:00.086 ********** 2025-05-03 00:24:19.700326 | orchestrator | ok: [testbed-manager] 2025-05-03 00:24:19.700684 | orchestrator | 2025-05-03 00:24:19.702545 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-03 00:24:19.702918 | orchestrator | Saturday 03 May 2025 00:24:19 +0000 (0:00:03.880) 0:00:03.967 ********** 2025-05-03 00:24:19.760773 | orchestrator | skipping: [testbed-manager] 2025-05-03 00:24:19.761070 | orchestrator | 2025-05-03 00:24:19.761651 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-03 00:24:19.762139 | orchestrator | Saturday 03 May 2025 00:24:19 +0000 (0:00:00.061) 0:00:04.029 ********** 2025-05-03 00:24:19.856536 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-05-03 00:24:19.857874 | orchestrator | 2025-05-03 00:24:19.858189 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-03 00:24:19.858924 | orchestrator | Saturday 03 May 2025 00:24:19 +0000 (0:00:00.095) 0:00:04.124 ********** 2025-05-03 00:24:19.935637 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-05-03 00:24:19.935898 | orchestrator | 2025-05-03 00:24:19.936199 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2025-05-03 00:24:19.936486 | orchestrator | Saturday 03 May 2025 00:24:19 +0000 (0:00:00.080) 0:00:04.205 ********** 2025-05-03 00:24:21.039956 | orchestrator | ok: [testbed-manager] 2025-05-03 00:24:21.040145 | orchestrator | 2025-05-03 00:24:21.041104 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-03 00:24:21.042634 | orchestrator | Saturday 03 May 2025 00:24:21 +0000 (0:00:01.100) 0:00:05.305 ********** 2025-05-03 00:24:21.100302 | orchestrator | skipping: [testbed-manager] 2025-05-03 00:24:21.100866 | orchestrator | 2025-05-03 00:24:21.101742 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-03 00:24:21.102436 | orchestrator | Saturday 03 May 2025 00:24:21 +0000 (0:00:00.062) 0:00:05.368 ********** 2025-05-03 00:24:21.579161 | orchestrator | ok: [testbed-manager] 2025-05-03 00:24:21.579391 | orchestrator | 2025-05-03 00:24:21.579804 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-03 00:24:21.580704 | orchestrator | Saturday 03 May 2025 00:24:21 +0000 (0:00:00.478) 0:00:05.847 ********** 2025-05-03 00:24:21.662320 | orchestrator | skipping: [testbed-manager] 2025-05-03 00:24:21.662519 | orchestrator | 2025-05-03 00:24:21.663019 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-03 00:24:21.663477 | orchestrator | Saturday 03 May 2025 00:24:21 +0000 (0:00:00.082) 0:00:05.929 ********** 2025-05-03 00:24:22.218611 | orchestrator | changed: [testbed-manager] 2025-05-03 00:24:22.219480 | orchestrator | 2025-05-03 00:24:22.219536 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-03 00:24:22.220964 | orchestrator | Saturday 03 May 2025 00:24:22 +0000 (0:00:00.556) 0:00:06.486 ********** 2025-05-03 00:24:23.313173 | orchestrator | changed: 
[testbed-manager] 2025-05-03 00:24:23.313542 | orchestrator | 2025-05-03 00:24:23.313981 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-03 00:24:23.314427 | orchestrator | Saturday 03 May 2025 00:24:23 +0000 (0:00:01.094) 0:00:07.580 ********** 2025-05-03 00:24:24.263163 | orchestrator | ok: [testbed-manager] 2025-05-03 00:24:24.263798 | orchestrator | 2025-05-03 00:24:24.264455 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-03 00:24:24.265092 | orchestrator | Saturday 03 May 2025 00:24:24 +0000 (0:00:00.950) 0:00:08.530 ********** 2025-05-03 00:24:24.345393 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-05-03 00:24:24.346371 | orchestrator | 2025-05-03 00:24:24.346979 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-03 00:24:24.348030 | orchestrator | Saturday 03 May 2025 00:24:24 +0000 (0:00:00.083) 0:00:08.614 ********** 2025-05-03 00:24:25.477784 | orchestrator | changed: [testbed-manager] 2025-05-03 00:24:25.478419 | orchestrator | 2025-05-03 00:24:25.480283 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-03 00:24:25.480590 | orchestrator | 2025-05-03 00:24:25 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-03 00:24:25.481028 | orchestrator | 2025-05-03 00:24:25 | INFO  | Please wait and do not abort execution. 
2025-05-03 00:24:25.481062 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-03 00:24:25.482232 | orchestrator |
2025-05-03 00:24:25.482876 | orchestrator | Saturday 03 May 2025 00:24:25 +0000 (0:00:01.131) 0:00:09.745 **********
2025-05-03 00:24:25.483815 | orchestrator | ===============================================================================
2025-05-03 00:24:25.484785 | orchestrator | Gathering Facts --------------------------------------------------------- 3.88s
2025-05-03 00:24:25.485354 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.13s
2025-05-03 00:24:25.485768 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.10s
2025-05-03 00:24:25.486589 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.09s
2025-05-03 00:24:25.487013 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.95s
2025-05-03 00:24:25.487782 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.56s
2025-05-03 00:24:25.488167 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.48s
2025-05-03 00:24:25.488775 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.10s
2025-05-03 00:24:25.489115 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2025-05-03 00:24:25.489476 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2025-05-03 00:24:25.489973 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s
2025-05-03 00:24:25.490529 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2025-05-03 00:24:25.490836 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s
2025-05-03 00:24:25.850570 | orchestrator | + osism apply sshconfig
2025-05-03 00:24:27.291410 | orchestrator | 2025-05-03 00:24:27 | INFO  | Task 41213414-c2f0-4ea7-b834-8979f8295720 (sshconfig) was prepared for execution.
2025-05-03 00:24:30.195841 | orchestrator | 2025-05-03 00:24:27 | INFO  | It takes a moment until task 41213414-c2f0-4ea7-b834-8979f8295720 (sshconfig) has been started and output is visible here.
2025-05-03 00:24:30.196022 | orchestrator |
2025-05-03 00:24:30.196805 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-05-03 00:24:30.196859 | orchestrator |
2025-05-03 00:24:30.198446 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-05-03 00:24:30.198988 | orchestrator | Saturday 03 May 2025 00:24:30 +0000 (0:00:00.099) 0:00:00.099 **********
2025-05-03 00:24:30.735591 | orchestrator | ok: [testbed-manager]
2025-05-03 00:24:30.735971 | orchestrator |
2025-05-03 00:24:30.736828 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-05-03 00:24:30.737532 | orchestrator | Saturday 03 May 2025 00:24:30 +0000 (0:00:00.539) 0:00:00.638 **********
2025-05-03 00:24:31.206467 | orchestrator | changed: [testbed-manager]
2025-05-03 00:24:31.206909 | orchestrator |
2025-05-03 00:24:31.207644 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-05-03 00:24:31.208098 | orchestrator | Saturday 03 May 2025 00:24:31 +0000 (0:00:00.473) 0:00:01.112 **********
2025-05-03 00:24:36.725339 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-05-03 00:24:36.729127 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-05-03 00:24:36.729324 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-05-03 00:24:36.729350 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-05-03 00:24:36.729372 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-05-03 00:24:36.729901 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-05-03 00:24:36.731385 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-05-03 00:24:36.731436 | orchestrator |
2025-05-03 00:24:36.731958 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-05-03 00:24:36.800872 | orchestrator | Saturday 03 May 2025 00:24:36 +0000 (0:00:05.517) 0:00:06.629 **********
2025-05-03 00:24:36.800987 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:24:36.801250 | orchestrator |
2025-05-03 00:24:37.334871 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-05-03 00:24:37.334992 | orchestrator | Saturday 03 May 2025 00:24:36 +0000 (0:00:00.077) 0:00:06.707 **********
2025-05-03 00:24:37.335028 | orchestrator | changed: [testbed-manager]
2025-05-03 00:24:37.335192 | orchestrator |
2025-05-03 00:24:37.335223 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 00:24:37.335495 | orchestrator | 2025-05-03 00:24:37 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-03 00:24:37.337633 | orchestrator | 2025-05-03 00:24:37 | INFO  | Please wait and do not abort execution.
2025-05-03 00:24:37.337683 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-03 00:24:37.338406 | orchestrator |
2025-05-03 00:24:37.339033 | orchestrator | Saturday 03 May 2025 00:24:37 +0000 (0:00:00.532) 0:00:07.240 **********
2025-05-03 00:24:37.339681 | orchestrator | ===============================================================================
2025-05-03 00:24:37.340466 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.52s
2025-05-03 00:24:37.340841 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.54s
2025-05-03 00:24:37.341572 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.53s
2025-05-03 00:24:37.342171 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.47s
2025-05-03 00:24:37.342746 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s
2025-05-03 00:24:37.717898 | orchestrator | + osism apply known-hosts
2025-05-03 00:24:39.101646 | orchestrator | 2025-05-03 00:24:39 | INFO  | Task 6071a156-bb78-434a-be8a-7663a48b288e (known-hosts) was prepared for execution.
2025-05-03 00:24:42.054992 | orchestrator | 2025-05-03 00:24:39 | INFO  | It takes a moment until task 6071a156-bb78-434a-be8a-7663a48b288e (known-hosts) has been started and output is visible here.
2025-05-03 00:24:42.055226 | orchestrator |
2025-05-03 00:24:42.056093 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-05-03 00:24:42.056741 | orchestrator |
2025-05-03 00:24:42.058897 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-05-03 00:24:48.051347 | orchestrator | Saturday 03 May 2025 00:24:42 +0000 (0:00:00.117) 0:00:00.117 **********
2025-05-03 00:24:48.051507 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-05-03 00:24:48.053338 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-05-03 00:24:48.053394 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-05-03 00:24:48.053790 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-05-03 00:24:48.055508 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-05-03 00:24:48.056327 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-05-03 00:24:48.057178 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-05-03 00:24:48.057662 | orchestrator |
2025-05-03 00:24:48.058228 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-05-03 00:24:48.059089 | orchestrator | Saturday 03 May 2025 00:24:48 +0000 (0:00:05.997) 0:00:06.115 **********
2025-05-03 00:24:48.210453 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-05-03 00:24:48.211312 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-05-03 00:24:48.211545 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-03 00:24:48.214545 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-03 00:24:48.215441 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-03 00:24:48.215995 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-03 00:24:48.217046 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-03 00:24:48.217556 | orchestrator | 2025-05-03 00:24:48.217943 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-03 00:24:48.218488 | orchestrator | Saturday 03 May 2025 00:24:48 +0000 (0:00:00.160) 0:00:06.276 ********** 2025-05-03 00:24:49.372133 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIA885Z33QHw/bhl3/1VgBxIZaEO6NJa3ecXkPTRi5kt) 2025-05-03 00:24:49.372720 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC6lxw2EgUbWs5yXcnXUAStKLEwUOi9qobsYRefGxtAuKw7NXyfhzrZ9FrtTPS87repbeszdbE7aQu/xigOMoEj0KKsxivWbDtgwiuMp1uXbRrkgoI+U+dTlaEwucnhR/3AZnNu+v8WUg1cCnulnerTHGy2Iy5dWEiS7+zP3VL/saUVNtDHmBJ4bOME0IzYJsvC2c96qYo38vXQ7rTX56wVMOshMAAPMg2rLeg2CWu+r7VpSmH06cXQhEnMhUKGarP52kGGxIwDzwP+BTmEr65WB909S7pPsImzYbnVndy91G1xQ5+ZqatTagPjcXuf/OHeIlitmw2RdKY3anXB2E76E2hbNDuFUU1RWRLdbCVw6sIDfo9YltPWsQi9uuYmR0W44d9Yfw00vjkmpkD3pJJf2ZBk36AqA/HW3MLa3AHqqXMs0Oa5U+6Vy6MgRabZSCQfqceXcWZF1uMYCQp3IJZ4magXFdazaZ5NOlqIPxumyT8/CmTe0ckiIR+0YOggdbk=) 2025-05-03 00:24:49.372782 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBF7OShQSUoWuxZAG3zJ1xkSew5x+8ZJmgj+o4pVmziKZzXBAOBZd9uX8MuNdpBA+HRW1EegiMcg371vtQIqz1g=) 2025-05-03 00:24:49.373258 | orchestrator | 2025-05-03 00:24:49.374287 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-03 00:24:49.374873 | orchestrator | Saturday 03 May 2025 00:24:49 +0000 (0:00:01.159) 0:00:07.436 ********** 2025-05-03 00:24:50.408689 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOE6ouQKzo5p46XYfzh41Rt6vopmr2prkGmqIIJi0LV3z6EGJkBr4yBmWILRvCVmT8mmi3WT1kV6qMFEboiRnMNF8YOrUd1sAOoLLGr5tW0SBXCa5wPo8EnS71ZTv4wwNpm4pooFoGcvG4DUmm84wB0j6Lkczqgcdps5groNSOF9I6zZSHq8kOM/od8LJYjTyJVxuO30eEKJSyiAFefhf7wqd9yKUmoDcABKwfPqktgp2EEC1FXAAp30wkpfa7MgJzAsUfGOrwyhVQO/fCbvXKxiPy5sfHPIErRGFzw0xUVKJZOanG3TltXYrnwHygw6EoXF4K/kXM7TKOtz6T4NeTfCDxisNNI/FN3uTci/8r+B3xXeEEXizzBKcshrCZHlSOB8BCT5+z8B6Z19z0mLkqeDgyWoU15g+s6shmdIzfvFgUpx4sPZPs0faDKSQ8OxUdXkL74JDrXGXp7d3DSF895mSXCW13STgrP0286dFgyyMwSXvqv3Vj29gZyCDiaIk=) 2025-05-03 00:24:50.409211 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBITUFvKip6NgabY5QcEMo9ROtCC6tBP2KGptGAZUydn5S0gde5bfZkr2iF/pq8BfKSt6/w5WcMs8cnLgNMUhZwg=) 
2025-05-03 00:24:50.410277 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILZ7kMHmOQ+JU3comPNur1lCABcp7rURKu1cUXXSpADl) 2025-05-03 00:24:50.410854 | orchestrator | 2025-05-03 00:24:50.411380 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-03 00:24:50.411861 | orchestrator | Saturday 03 May 2025 00:24:50 +0000 (0:00:01.037) 0:00:08.474 ********** 2025-05-03 00:24:51.420447 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCm59Og79PwbF+ksmKabJw9CJrDa5deA8vABMV8PSPXjbxGzCfkts6POwVFuSYt5WuixEe4VRwN7dac3qF/mthPHGQeqlLmMv+B9x26WYrm5UP8Ajs9PEw4+B2Ib6q+fM9tLzgCzTfD4BwOMPnlq4f7Z6cuuxT8QTq27UDRxEaPBmAGoXYIY1QMkFopshdMaDco/I5w21XnjcgqUlTsJjemlrvsmLJ4Am8KdUMyk6BnS58cCIVTVjC12W4TTsiicxkG2WfXuRUTZoexN2shyLNbBNCajgrCsokn9f0zqwf3rU7NsrlFQpGgnElKuVd38C9Zr+cba81lA0SWmvSr3y+895dVI2o3Xnw9NPR2xRbDG9YvYAKSGFilyVeLo/3GY1btqt+Asysk1e5yuV8Fp2uTfwjB1RJtRcZA8yuEhGPhxvDDRUhXgEgdGqw+Ue0x/GL6jgyfO80cak/jEA3UHcLDZeKSP3mON97JCQOmkP7uCkaYDb4EOCo9ADVQ8hQrtx8=) 2025-05-03 00:24:51.421146 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIX4zR98BH6QPy0viQgxesM9cxZkXu52RGO771YCLL4ks87PzD0dbyAgaZgg4M4CuWME4JPwq5yK7+R9yVsM4yo=) 2025-05-03 00:24:51.421200 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIClruh6So43+U0yPevmorYz++yvDhrua87yd69Mehz9m) 2025-05-03 00:24:51.421621 | orchestrator | 2025-05-03 00:24:51.422282 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-03 00:24:51.423341 | orchestrator | Saturday 03 May 2025 00:24:51 +0000 (0:00:01.010) 0:00:09.484 ********** 2025-05-03 00:24:52.438289 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIJ1hK6R12lVGz4nyXd3CxOB5mVkfE42YCF/5ek+tvZTH) 2025-05-03 00:24:52.438647 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCn9+x7SH4nDUWPp6dUxtekQDbaxQEdeldjrZfgBzmxB/W5JLKEUY2jnOULTo2ISrjVtRaEH2oEkvUQAxfCklssdzyKCBidsfA30eAzWTMv1URFk+ybQq6IWK1Hzyof5xVd+7fQJlgmUepVFL6MXylspGEpusRvoRj9v3b29Xkn5D5B5bzUl0dHvvabpv16IIDyuApS6uNbmXZEfgWePQ99dAXdO++6vDUzYbKAWDMRrcT7xMCtXY09KefDS3CSR0mGUSGheEyIxeC1xvD5KI5gmtkY3YIoUZm4d/7GC7ZNS6sXYs4YunO6TAs0JLl9YV5fomV1LidwJRGTUIzErfpu47IpSUHbjFrcOSV4Py4PMnahoAJvpSm0zJMcTHJ5tgp7mbeD8hnKB43LeLt70CydSP5nk4n707dVEBHkftWByTeoRZejGzaNtBsqDUG0AJfjMus0PQBQyWoQHz18D04+HhhE47y0ASKi1MPQsWl9eMjG/iY0QJp1jUz7vMdo4rM=) 2025-05-03 00:24:52.438695 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCrnOHHSZ/wfunZixinS7uoITh9B7C/RR4YlsTsMdtc82mvXdrYQID3GumdmFCwyM9s2Nuzur0sPJXgCtQKs2RQ=) 2025-05-03 00:24:52.439569 | orchestrator | 2025-05-03 00:24:52.439994 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-03 00:24:52.440527 | orchestrator | Saturday 03 May 2025 00:24:52 +0000 (0:00:01.018) 0:00:10.502 ********** 2025-05-03 00:24:53.437550 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGp49GdSRcn011DY+ajqty+8L3IDHx6aGDDuy7HKEpq4ArBOVHLmMfGnhzZvUwc5P8wiMq5Sp4l7jPIYF9yp2qI=) 2025-05-03 00:24:53.438845 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCiiNC1jm7+YDLiuQfa2qr44QVPeAlolpc77eyeIVX9+gIfw5U8JtUAVyj8L9+WjpURggL5/I1+ESLzFRBGorBO7Nsnn7ZIZGMgIbt6Qt8mt5NLklEWlXj5EHGvAYp93c3kZ6qjWkVmk8Iy0fYKGxZfHJGc7DWdcJf2JPdUjOkaTDBOwdj+Dro840EMSRD3SlUEFfitvEgw2OEZymrosbNFM7twyYlYMuTfdL/yoKFEC1n+SHXlWj06Jqhuq/gmpTLnh4V5e/OfrzzxpYno6//KGWYSUYsg4DW17zvc+uE33V66PKa3qitvyhNtWo78mBZw6Fal09gpGNX7jBbcmDLDClCeHfrfljFSXl7UJZr2DqwA6mn9SFC+Db+2FcZPcACKB3nesXViZik6EDEk3f45U5WSnf3icawrn7JRhsZcsAGTLy+Dq0b/6z70gZpNK7HiF8s98bC5OHAAHtvlGa1edZCfFvGzjnUIAayiBkd0H0ygvDkFaEN9XWBBegOWm1c=) 2025-05-03 00:24:53.438898 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMmsvyw6aX+Xxv9/409yytgKMz85WVnv9y63jk7YjfW8) 2025-05-03 00:24:53.439411 | orchestrator | 2025-05-03 00:24:53.440337 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-03 00:24:53.440391 | orchestrator | Saturday 03 May 2025 00:24:53 +0000 (0:00:01.000) 0:00:11.502 ********** 2025-05-03 00:24:54.492411 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDPU2C5D+qXTNaJtX9kXNR/MycY/1C+GU3wEvoLaHCswHHfdY1kr821lcYUZRjS9DdzjTOqxdSPV0qtemq1VFRwx4BmTJLixhJ79Z+jTQpVgyE7HLRgguYUMafE1O7Dyr6OyeNvzyDTd5dxHvsMea+SDXKu+uuBCU5Rx1maeEnqdvj9MLq+glbT2ru9n5S78lO6NVnhW9fC9Lb4igQ2meySsAvhqblUt025oPeAp74L3JVXfmYzsb8xw8zcgasMpsrGXd6q/Jost11r6zi8cm9bodxKG3EtaBoz7a37VuLHsyXNQwnCKAY44reFHLbzMqa6TRNxWmXmuP4xo/3lVwHZYyPOQ5BQOfOPfCr09kWVNzA5WF0p1GQyNl1F4aPtORnVgrEkrSnnt2Q3R6xnckLZxzN995dNKYq0qrdOph4uUCF+JuOGbqVnM1lh8BczXkFwHfmitIqRSJMbXZPxeSCj0l7C+Mob5nzANfxXarows8LzsiK1JG4F2KfEfAv7Qik=) 2025-05-03 00:24:54.493087 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOuvU+UFcLTn2RrS6Ik4V358KtP57H70JKrxjWDcmnMaxHwjnVRO1deZx8jIrBrJY6kreokWgACQEBU/jW2D1lM=) 2025-05-03 00:24:54.493590 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG6WjDb6coGcfkyTsr347bbEvc3SmLYRdgNOm+5kfDyF) 2025-05-03 00:24:54.494871 | orchestrator | 2025-05-03 00:24:54.495374 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-03 00:24:54.497486 | orchestrator | Saturday 03 May 2025 00:24:54 +0000 (0:00:01.054) 0:00:12.556 ********** 2025-05-03 00:24:55.527185 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgaev4YcJ/ZM8OsJe8jh607+9V9SY3MllI45YIgSgzYNXYFTZjnUDKj+M2EqXAiO78dqHTm5o2OjWnSloN8jNtkwiCBjU8ZIJSj0bV1Blv0nTCYOmM2JCtsWHywozt0IgjtLVnVEUGNOmHx407F9vEBXtNVS/FxkqI3FIeikIuslTwYKVT5HTJMHm1BR+QOYdiN5s3oGXzlcvWrIRFvZa/hnCqG+yxys/QZZp+822YpMXqtU0DFtY/SmDw1YrkN8vcjdCmhIN354DObcNGrobFRbhV5V+Vtj3xkULLzV4KDFJk9O2QotLGhHM701leZs8qHZ+HFxjGlf5F1kJHCERwqGE6orWTQNmhUCFkCNAnduLeHLsDSHpgcMFpv6e+d0IYPwlWzE6/eowVoM7oA01Ud+NGA0yg3/JA1is+7Jb9lsb0ikUgh1IHc0J90SDw41F7TirmGttX2kJW3KlgTtu9J3vMJH+jun69apV4nxTkGhYmXr5MuIuhvuJ8zVrdAEk=) 2025-05-03 00:24:55.527637 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAC8lKBdV/KA5ubCEgL52BuBAxzJkOhmwN+AFdQwOdsvONmNSNyER27hLlSKJYEES0VHueFM1baeQoALDsFJ3TI=) 2025-05-03 00:24:55.528398 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDvzaFOqejDbNeevT8FcK66TCeeZrgXqkEfUznWOyWj8) 2025-05-03 00:24:55.529220 | orchestrator | 2025-05-03 00:24:55.529723 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-05-03 00:24:55.530201 | orchestrator | Saturday 03 May 2025 00:24:55 +0000 (0:00:01.033) 0:00:13.590 ********** 2025-05-03 00:25:00.816536 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-03 00:25:00.817625 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-03 00:25:00.819297 | orchestrator | ok: 
[testbed-manager] => (item=testbed-node-4)
2025-05-03 00:25:00.820035 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-05-03 00:25:00.821989 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-05-03 00:25:00.822429 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-05-03 00:25:00.823665 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-05-03 00:25:00.824273 | orchestrator |
2025-05-03 00:25:00.825011 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-05-03 00:25:00.825474 | orchestrator | Saturday 03 May 2025 00:25:00 +0000 (0:00:05.290) 0:00:18.880 **********
2025-05-03 00:25:00.978982 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-05-03 00:25:00.980461 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-05-03 00:25:00.981178 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-05-03 00:25:00.983090 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-05-03 00:25:00.983441 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-05-03 00:25:00.983503 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-03 00:25:00.984264 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-03 00:25:00.985124 | orchestrator | 2025-05-03 00:25:00.986240 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-03 00:25:00.986535 | orchestrator | Saturday 03 May 2025 00:25:00 +0000 (0:00:00.163) 0:00:19.044 ********** 2025-05-03 00:25:02.019259 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBF7OShQSUoWuxZAG3zJ1xkSew5x+8ZJmgj+o4pVmziKZzXBAOBZd9uX8MuNdpBA+HRW1EegiMcg371vtQIqz1g=) 2025-05-03 00:25:02.020370 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6lxw2EgUbWs5yXcnXUAStKLEwUOi9qobsYRefGxtAuKw7NXyfhzrZ9FrtTPS87repbeszdbE7aQu/xigOMoEj0KKsxivWbDtgwiuMp1uXbRrkgoI+U+dTlaEwucnhR/3AZnNu+v8WUg1cCnulnerTHGy2Iy5dWEiS7+zP3VL/saUVNtDHmBJ4bOME0IzYJsvC2c96qYo38vXQ7rTX56wVMOshMAAPMg2rLeg2CWu+r7VpSmH06cXQhEnMhUKGarP52kGGxIwDzwP+BTmEr65WB909S7pPsImzYbnVndy91G1xQ5+ZqatTagPjcXuf/OHeIlitmw2RdKY3anXB2E76E2hbNDuFUU1RWRLdbCVw6sIDfo9YltPWsQi9uuYmR0W44d9Yfw00vjkmpkD3pJJf2ZBk36AqA/HW3MLa3AHqqXMs0Oa5U+6Vy6MgRabZSCQfqceXcWZF1uMYCQp3IJZ4magXFdazaZ5NOlqIPxumyT8/CmTe0ckiIR+0YOggdbk=) 2025-05-03 00:25:02.020432 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIA885Z33QHw/bhl3/1VgBxIZaEO6NJa3ecXkPTRi5kt) 2025-05-03 00:25:02.020880 | orchestrator | 2025-05-03 00:25:02.020910 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-03 00:25:02.021648 | orchestrator | Saturday 03 May 2025 
00:25:02 +0000 (0:00:01.039) 0:00:20.084 ********** 2025-05-03 00:25:03.023557 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOE6ouQKzo5p46XYfzh41Rt6vopmr2prkGmqIIJi0LV3z6EGJkBr4yBmWILRvCVmT8mmi3WT1kV6qMFEboiRnMNF8YOrUd1sAOoLLGr5tW0SBXCa5wPo8EnS71ZTv4wwNpm4pooFoGcvG4DUmm84wB0j6Lkczqgcdps5groNSOF9I6zZSHq8kOM/od8LJYjTyJVxuO30eEKJSyiAFefhf7wqd9yKUmoDcABKwfPqktgp2EEC1FXAAp30wkpfa7MgJzAsUfGOrwyhVQO/fCbvXKxiPy5sfHPIErRGFzw0xUVKJZOanG3TltXYrnwHygw6EoXF4K/kXM7TKOtz6T4NeTfCDxisNNI/FN3uTci/8r+B3xXeEEXizzBKcshrCZHlSOB8BCT5+z8B6Z19z0mLkqeDgyWoU15g+s6shmdIzfvFgUpx4sPZPs0faDKSQ8OxUdXkL74JDrXGXp7d3DSF895mSXCW13STgrP0286dFgyyMwSXvqv3Vj29gZyCDiaIk=) 2025-05-03 00:25:03.024501 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBITUFvKip6NgabY5QcEMo9ROtCC6tBP2KGptGAZUydn5S0gde5bfZkr2iF/pq8BfKSt6/w5WcMs8cnLgNMUhZwg=) 2025-05-03 00:25:03.025245 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILZ7kMHmOQ+JU3comPNur1lCABcp7rURKu1cUXXSpADl) 2025-05-03 00:25:03.025678 | orchestrator | 2025-05-03 00:25:03.026147 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-03 00:25:03.026418 | orchestrator | Saturday 03 May 2025 00:25:03 +0000 (0:00:01.004) 0:00:21.088 ********** 2025-05-03 00:25:04.064893 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCm59Og79PwbF+ksmKabJw9CJrDa5deA8vABMV8PSPXjbxGzCfkts6POwVFuSYt5WuixEe4VRwN7dac3qF/mthPHGQeqlLmMv+B9x26WYrm5UP8Ajs9PEw4+B2Ib6q+fM9tLzgCzTfD4BwOMPnlq4f7Z6cuuxT8QTq27UDRxEaPBmAGoXYIY1QMkFopshdMaDco/I5w21XnjcgqUlTsJjemlrvsmLJ4Am8KdUMyk6BnS58cCIVTVjC12W4TTsiicxkG2WfXuRUTZoexN2shyLNbBNCajgrCsokn9f0zqwf3rU7NsrlFQpGgnElKuVd38C9Zr+cba81lA0SWmvSr3y+895dVI2o3Xnw9NPR2xRbDG9YvYAKSGFilyVeLo/3GY1btqt+Asysk1e5yuV8Fp2uTfwjB1RJtRcZA8yuEhGPhxvDDRUhXgEgdGqw+Ue0x/GL6jgyfO80cak/jEA3UHcLDZeKSP3mON97JCQOmkP7uCkaYDb4EOCo9ADVQ8hQrtx8=) 2025-05-03 00:25:04.066114 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIX4zR98BH6QPy0viQgxesM9cxZkXu52RGO771YCLL4ks87PzD0dbyAgaZgg4M4CuWME4JPwq5yK7+R9yVsM4yo=) 2025-05-03 00:25:04.066832 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIClruh6So43+U0yPevmorYz++yvDhrua87yd69Mehz9m) 2025-05-03 00:25:04.066912 | orchestrator | 2025-05-03 00:25:04.067716 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-03 00:25:04.068193 | orchestrator | Saturday 03 May 2025 00:25:04 +0000 (0:00:01.039) 0:00:22.128 ********** 2025-05-03 00:25:05.122462 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ1hK6R12lVGz4nyXd3CxOB5mVkfE42YCF/5ek+tvZTH) 2025-05-03 00:25:05.122877 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCn9+x7SH4nDUWPp6dUxtekQDbaxQEdeldjrZfgBzmxB/W5JLKEUY2jnOULTo2ISrjVtRaEH2oEkvUQAxfCklssdzyKCBidsfA30eAzWTMv1URFk+ybQq6IWK1Hzyof5xVd+7fQJlgmUepVFL6MXylspGEpusRvoRj9v3b29Xkn5D5B5bzUl0dHvvabpv16IIDyuApS6uNbmXZEfgWePQ99dAXdO++6vDUzYbKAWDMRrcT7xMCtXY09KefDS3CSR0mGUSGheEyIxeC1xvD5KI5gmtkY3YIoUZm4d/7GC7ZNS6sXYs4YunO6TAs0JLl9YV5fomV1LidwJRGTUIzErfpu47IpSUHbjFrcOSV4Py4PMnahoAJvpSm0zJMcTHJ5tgp7mbeD8hnKB43LeLt70CydSP5nk4n707dVEBHkftWByTeoRZejGzaNtBsqDUG0AJfjMus0PQBQyWoQHz18D04+HhhE47y0ASKi1MPQsWl9eMjG/iY0QJp1jUz7vMdo4rM=) 2025-05-03 00:25:05.123534 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCrnOHHSZ/wfunZixinS7uoITh9B7C/RR4YlsTsMdtc82mvXdrYQID3GumdmFCwyM9s2Nuzur0sPJXgCtQKs2RQ=) 2025-05-03 00:25:05.124863 | orchestrator | 2025-05-03 00:25:05.125179 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-03 00:25:05.126273 | orchestrator | Saturday 03 May 2025 00:25:05 +0000 (0:00:01.058) 0:00:23.187 ********** 2025-05-03 00:25:06.124021 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMmsvyw6aX+Xxv9/409yytgKMz85WVnv9y63jk7YjfW8) 2025-05-03 00:25:06.124355 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCiiNC1jm7+YDLiuQfa2qr44QVPeAlolpc77eyeIVX9+gIfw5U8JtUAVyj8L9+WjpURggL5/I1+ESLzFRBGorBO7Nsnn7ZIZGMgIbt6Qt8mt5NLklEWlXj5EHGvAYp93c3kZ6qjWkVmk8Iy0fYKGxZfHJGc7DWdcJf2JPdUjOkaTDBOwdj+Dro840EMSRD3SlUEFfitvEgw2OEZymrosbNFM7twyYlYMuTfdL/yoKFEC1n+SHXlWj06Jqhuq/gmpTLnh4V5e/OfrzzxpYno6//KGWYSUYsg4DW17zvc+uE33V66PKa3qitvyhNtWo78mBZw6Fal09gpGNX7jBbcmDLDClCeHfrfljFSXl7UJZr2DqwA6mn9SFC+Db+2FcZPcACKB3nesXViZik6EDEk3f45U5WSnf3icawrn7JRhsZcsAGTLy+Dq0b/6z70gZpNK7HiF8s98bC5OHAAHtvlGa1edZCfFvGzjnUIAayiBkd0H0ygvDkFaEN9XWBBegOWm1c=) 2025-05-03 00:25:06.124400 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGp49GdSRcn011DY+ajqty+8L3IDHx6aGDDuy7HKEpq4ArBOVHLmMfGnhzZvUwc5P8wiMq5Sp4l7jPIYF9yp2qI=) 2025-05-03 00:25:06.124807 | orchestrator | 2025-05-03 00:25:06.125351 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-03 00:25:06.126812 | orchestrator | Saturday 03 May 2025 00:25:06 +0000 (0:00:01.001) 0:00:24.189 ********** 2025-05-03 00:25:07.131652 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDPU2C5D+qXTNaJtX9kXNR/MycY/1C+GU3wEvoLaHCswHHfdY1kr821lcYUZRjS9DdzjTOqxdSPV0qtemq1VFRwx4BmTJLixhJ79Z+jTQpVgyE7HLRgguYUMafE1O7Dyr6OyeNvzyDTd5dxHvsMea+SDXKu+uuBCU5Rx1maeEnqdvj9MLq+glbT2ru9n5S78lO6NVnhW9fC9Lb4igQ2meySsAvhqblUt025oPeAp74L3JVXfmYzsb8xw8zcgasMpsrGXd6q/Jost11r6zi8cm9bodxKG3EtaBoz7a37VuLHsyXNQwnCKAY44reFHLbzMqa6TRNxWmXmuP4xo/3lVwHZYyPOQ5BQOfOPfCr09kWVNzA5WF0p1GQyNl1F4aPtORnVgrEkrSnnt2Q3R6xnckLZxzN995dNKYq0qrdOph4uUCF+JuOGbqVnM1lh8BczXkFwHfmitIqRSJMbXZPxeSCj0l7C+Mob5nzANfxXarows8LzsiK1JG4F2KfEfAv7Qik=) 2025-05-03 00:25:07.132169 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG6WjDb6coGcfkyTsr347bbEvc3SmLYRdgNOm+5kfDyF) 2025-05-03 00:25:07.133394 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOuvU+UFcLTn2RrS6Ik4V358KtP57H70JKrxjWDcmnMaxHwjnVRO1deZx8jIrBrJY6kreokWgACQEBU/jW2D1lM=) 2025-05-03 00:25:07.134272 | orchestrator | 2025-05-03 00:25:07.134754 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-03 00:25:07.135408 | orchestrator | Saturday 03 May 2025 00:25:07 +0000 (0:00:01.006) 0:00:25.195 ********** 2025-05-03 00:25:08.174745 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCgaev4YcJ/ZM8OsJe8jh607+9V9SY3MllI45YIgSgzYNXYFTZjnUDKj+M2EqXAiO78dqHTm5o2OjWnSloN8jNtkwiCBjU8ZIJSj0bV1Blv0nTCYOmM2JCtsWHywozt0IgjtLVnVEUGNOmHx407F9vEBXtNVS/FxkqI3FIeikIuslTwYKVT5HTJMHm1BR+QOYdiN5s3oGXzlcvWrIRFvZa/hnCqG+yxys/QZZp+822YpMXqtU0DFtY/SmDw1YrkN8vcjdCmhIN354DObcNGrobFRbhV5V+Vtj3xkULLzV4KDFJk9O2QotLGhHM701leZs8qHZ+HFxjGlf5F1kJHCERwqGE6orWTQNmhUCFkCNAnduLeHLsDSHpgcMFpv6e+d0IYPwlWzE6/eowVoM7oA01Ud+NGA0yg3/JA1is+7Jb9lsb0ikUgh1IHc0J90SDw41F7TirmGttX2kJW3KlgTtu9J3vMJH+jun69apV4nxTkGhYmXr5MuIuhvuJ8zVrdAEk=) 2025-05-03 00:25:08.175224 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAC8lKBdV/KA5ubCEgL52BuBAxzJkOhmwN+AFdQwOdsvONmNSNyER27hLlSKJYEES0VHueFM1baeQoALDsFJ3TI=) 2025-05-03 00:25:08.176103 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDvzaFOqejDbNeevT8FcK66TCeeZrgXqkEfUznWOyWj8) 2025-05-03 00:25:08.176408 | orchestrator | 2025-05-03 00:25:08.177134 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-05-03 00:25:08.177829 | orchestrator | Saturday 03 May 2025 00:25:08 +0000 (0:00:01.043) 0:00:26.238 ********** 2025-05-03 00:25:08.332738 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-03 00:25:08.333500 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-03 00:25:08.333571 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-03 00:25:08.334115 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-03 00:25:08.334729 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-03 00:25:08.335000 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-03 00:25:08.335748 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-03 00:25:08.336222 | orchestrator | 
skipping: [testbed-manager]
2025-05-03 00:25:08.336599 | orchestrator |
2025-05-03 00:25:08.336957 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2025-05-03 00:25:08.337424 | orchestrator | Saturday 03 May 2025 00:25:08 +0000 (0:00:00.159) 0:00:26.398 **********
2025-05-03 00:25:08.386139 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:25:08.386328 | orchestrator |
2025-05-03 00:25:08.386353 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2025-05-03 00:25:08.387000 | orchestrator | Saturday 03 May 2025 00:25:08 +0000 (0:00:00.052) 0:00:26.451 **********
2025-05-03 00:25:08.445349 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:25:08.445681 | orchestrator |
2025-05-03 00:25:08.446178 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2025-05-03 00:25:08.446757 | orchestrator | Saturday 03 May 2025 00:25:08 +0000 (0:00:00.060) 0:00:26.512 **********
2025-05-03 00:25:09.169155 | orchestrator | changed: [testbed-manager]
2025-05-03 00:25:09.169661 | orchestrator |
2025-05-03 00:25:09.170192 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 00:25:09.170226 | orchestrator | 2025-05-03 00:25:09 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-03 00:25:09.171746 | orchestrator | 2025-05-03 00:25:09 | INFO  | Please wait and do not abort execution.
2025-05-03 00:25:09.171877 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-03 00:25:09.172006 | orchestrator |
2025-05-03 00:25:09.172027 | orchestrator | Saturday 03 May 2025 00:25:09 +0000 (0:00:00.721) 0:00:27.233 **********
2025-05-03 00:25:09.172077 | orchestrator | ===============================================================================
2025-05-03 00:25:09.173026 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.00s
2025-05-03 00:25:09.173776 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.29s
2025-05-03 00:25:09.174004 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s
2025-05-03 00:25:09.176333 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s
2025-05-03 00:25:09.177637 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-05-03 00:25:09.178214 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-05-03 00:25:09.179152 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-05-03 00:25:09.181507 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-05-03 00:25:09.182276 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-05-03 00:25:09.182705 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2025-05-03 00:25:09.183955 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2025-05-03 00:25:09.184814 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s
2025-05-03 00:25:09.185346 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s
2025-05-03 00:25:09.186175 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s
2025-05-03 00:25:09.186595 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s
2025-05-03 00:25:09.186999 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s
2025-05-03 00:25:09.187568 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.72s
2025-05-03 00:25:09.187827 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s
2025-05-03 00:25:09.188175 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s
2025-05-03 00:25:09.188499 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s
2025-05-03 00:25:09.493755 | orchestrator | + osism apply squid
2025-05-03 00:25:10.877935 | orchestrator | 2025-05-03 00:25:10 | INFO  | Task 93556e0b-5525-4a2a-9bfe-776d5a0ac8dc (squid) was prepared for execution.
2025-05-03 00:25:13.810725 | orchestrator | 2025-05-03 00:25:10 | INFO  | It takes a moment until task 93556e0b-5525-4a2a-9bfe-776d5a0ac8dc (squid) has been started and output is visible here.
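The known_hosts play summarized in the recap above spends most of its time running `ssh-keyscan` against every host and writing the scanned entries into a known_hosts file. A minimal sketch of that scan-and-write pattern follows; the host list, timeout, and output path are illustrative assumptions, not values taken from this job:

```shell
#!/usr/bin/env bash
# Sketch of the ssh-keyscan pattern used by the known_hosts role above.
# Hosts and output path are hypothetical examples.
known_hosts=/tmp/known_hosts.sketch
: > "$known_hosts"
for host in 127.0.0.1; do
  # Collect the host keys (rsa/ecdsa/ed25519 by default); tolerate
  # failures so one unreachable host does not abort the loop.
  ssh-keyscan -T 5 "$host" >> "$known_hosts" 2>/dev/null || true
done
# The role's final task sets file permissions explicitly.
chmod 0644 "$known_hosts"
```

The explicit `chmod` mirrors the "Set file permissions" task that closes the play in the recap.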
2025-05-03 00:25:13.810843 | orchestrator |
2025-05-03 00:25:13.812866 | orchestrator | PLAY [Apply role squid] ********************************************************
2025-05-03 00:25:13.813545 | orchestrator |
2025-05-03 00:25:13.816166 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2025-05-03 00:25:13.816941 | orchestrator | Saturday 03 May 2025 00:25:13 +0000 (0:00:00.105) 0:00:00.105 **********
2025-05-03 00:25:13.896770 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2025-05-03 00:25:13.898810 | orchestrator |
2025-05-03 00:25:15.306574 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2025-05-03 00:25:15.306709 | orchestrator | Saturday 03 May 2025 00:25:13 +0000 (0:00:00.089) 0:00:00.194 **********
2025-05-03 00:25:15.306747 | orchestrator | ok: [testbed-manager]
2025-05-03 00:25:15.307135 | orchestrator |
2025-05-03 00:25:15.307185 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2025-05-03 00:25:15.307533 | orchestrator | Saturday 03 May 2025 00:25:15 +0000 (0:00:01.406) 0:00:01.601 **********
2025-05-03 00:25:16.431934 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2025-05-03 00:25:16.432548 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2025-05-03 00:25:16.433496 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2025-05-03 00:25:16.433772 | orchestrator |
2025-05-03 00:25:16.434414 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2025-05-03 00:25:16.434938 | orchestrator | Saturday 03 May 2025 00:25:16 +0000 (0:00:01.126) 0:00:02.727 **********
2025-05-03 00:25:17.492508 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2025-05-03 00:25:17.492745 | orchestrator |
2025-05-03 00:25:17.493895 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2025-05-03 00:25:17.494454 | orchestrator | Saturday 03 May 2025 00:25:17 +0000 (0:00:01.056) 0:00:03.783 **********
2025-05-03 00:25:17.835171 | orchestrator | ok: [testbed-manager]
2025-05-03 00:25:17.835348 | orchestrator |
2025-05-03 00:25:17.835381 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2025-05-03 00:25:17.836950 | orchestrator | Saturday 03 May 2025 00:25:17 +0000 (0:00:00.347) 0:00:04.131 **********
2025-05-03 00:25:18.812602 | orchestrator | changed: [testbed-manager]
2025-05-03 00:25:18.812840 | orchestrator |
2025-05-03 00:25:18.814294 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2025-05-03 00:25:18.814608 | orchestrator | Saturday 03 May 2025 00:25:18 +0000 (0:00:00.976) 0:00:05.108 **********
2025-05-03 00:25:50.961538 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2025-05-03 00:25:50.962950 | orchestrator | ok: [testbed-manager]
2025-05-03 00:25:50.962986 | orchestrator |
2025-05-03 00:25:50.962997 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2025-05-03 00:25:50.963033 | orchestrator | Saturday 03 May 2025 00:25:50 +0000 (0:00:32.145) 0:00:37.254 **********
2025-05-03 00:26:03.260089 | orchestrator | changed: [testbed-manager]
2025-05-03 00:27:03.340621 | orchestrator |
2025-05-03 00:27:03.340799 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2025-05-03 00:27:03.340837 | orchestrator | Saturday 03 May 2025 00:26:03 +0000 (0:00:12.296) 0:00:49.551 **********
2025-05-03 00:27:03.341085 | orchestrator | Pausing for 60 seconds
2025-05-03 00:27:03.401204 | orchestrator | changed: [testbed-manager]
2025-05-03 00:27:03.401441 | orchestrator |
2025-05-03 00:27:03.401462 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2025-05-03 00:27:03.401479 | orchestrator | Saturday 03 May 2025 00:27:03 +0000 (0:01:00.079) 0:01:49.630 **********
2025-05-03 00:27:03.401509 | orchestrator | ok: [testbed-manager]
2025-05-03 00:27:03.402406 | orchestrator |
2025-05-03 00:27:03.402438 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2025-05-03 00:27:03.402798 | orchestrator | Saturday 03 May 2025 00:27:03 +0000 (0:00:00.067) 0:01:49.697 **********
2025-05-03 00:27:03.988433 | orchestrator | changed: [testbed-manager]
2025-05-03 00:27:03.989095 | orchestrator |
2025-05-03 00:27:03.989158 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 00:27:03.989706 | orchestrator | 2025-05-03 00:27:03 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-03 00:27:03.989864 | orchestrator | 2025-05-03 00:27:03 | INFO  | Please wait and do not abort execution.
2025-05-03 00:27:03.989891 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:27:03.990397 | orchestrator |
2025-05-03 00:27:03.990827 | orchestrator | Saturday 03 May 2025 00:27:03 +0000 (0:00:00.584) 0:01:50.282 **********
2025-05-03 00:27:03.991107 | orchestrator | ===============================================================================
2025-05-03 00:27:03.991424 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s
2025-05-03 00:27:03.991778 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.15s
2025-05-03 00:27:03.992127 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.30s
2025-05-03 00:27:03.992506 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.41s
2025-05-03 00:27:03.992795 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.13s
2025-05-03 00:27:03.994070 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.06s
2025-05-03 00:27:03.994487 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.98s
2025-05-03 00:27:03.994791 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.59s
2025-05-03 00:27:03.995174 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s
2025-05-03 00:27:03.995490 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s
2025-05-03 00:27:03.995853 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s
2025-05-03 00:27:04.397667 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-05-03 00:27:04.405058 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml
2025-05-03 00:27:04.405178 | orchestrator | ++ semver 8.1.0 9.0.0
2025-05-03 00:27:04.464421 | orchestrator | + [[ -1 -lt 0 ]]
2025-05-03 00:27:04.469116 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-05-03 00:27:04.469181 | orchestrator | + sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml
2025-05-03 00:27:04.469210 | orchestrator | + sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml
2025-05-03 00:27:04.473899 | orchestrator | + sed -i 's|^# \( dest: routable.d/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml
2025-05-03 00:27:04.480700 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-05-03 00:27:05.915517 | orchestrator | 2025-05-03 00:27:05 | INFO  | Task 8591aaf9-98df-44b5-b839-c4139d2302e2 (operator) was prepared for execution.
2025-05-03 00:27:08.842448 | orchestrator | 2025-05-03 00:27:05 | INFO  | It takes a moment until task 8591aaf9-98df-44b5-b839-c4139d2302e2 (operator) has been started and output is visible here.
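The shell trace above gates version-specific tweaks on `semver 8.1.0 9.0.0` (which prints `-1` because 8.1.0 sorts before 9.0.0) followed by `[[ -1 -lt 0 ]]`. The `semver` helper itself is not shown in the log; `semver_cmp` below is a hypothetical stand-in sketching the same comparison with GNU `sort -V`:

```shell
#!/usr/bin/env bash
# Sketch of the version gate seen in the trace above: run the pre-9.0.0
# configuration tweaks only when the deployed version is older than 9.0.0
# and is not the special value "latest". semver_cmp is a hypothetical
# stand-in for the job's semver helper.
semver_cmp() {
  # Prints -1, 0, or 1 depending on how $1 compares to $2 in version order.
  if [[ "$1" == "$2" ]]; then echo 0; return; fi
  local lower
  lower=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
  if [[ "$lower" == "$1" ]]; then echo -1; else echo 1; fi
}

version="8.1.0"
if [[ "$version" != "latest" ]] && [[ "$(semver_cmp "$version" "9.0.0")" -lt 0 ]]; then
  echo "applying pre-9.0.0 configuration tweaks"
fi
```

Note that `sort -V` handles multi-digit components (e.g. 8.10.0 vs 8.2.0) correctly, which a plain string comparison would not.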
2025-05-03 00:27:08.842593 | orchestrator |
2025-05-03 00:27:08.842792 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-05-03 00:27:08.842837 | orchestrator |
2025-05-03 00:27:08.843878 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-03 00:27:12.068334 | orchestrator | Saturday 03 May 2025 00:27:08 +0000 (0:00:00.074) 0:00:00.074 **********
2025-05-03 00:27:12.068461 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:27:12.068655 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:27:12.068688 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:27:12.068931 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:27:12.069394 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:27:12.073528 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:27:12.073791 | orchestrator |
2025-05-03 00:27:12.073823 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-05-03 00:27:12.074599 | orchestrator | Saturday 03 May 2025 00:27:12 +0000 (0:00:03.230) 0:00:03.304 **********
2025-05-03 00:27:12.863382 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:27:12.863763 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:27:12.863808 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:27:12.864635 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:27:12.865079 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:27:12.865781 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:27:12.866184 | orchestrator |
2025-05-03 00:27:12.866759 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-05-03 00:27:12.867399 | orchestrator |
2025-05-03 00:27:12.867774 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-05-03 00:27:12.868447 | orchestrator | Saturday 03 May 2025 00:27:12 +0000 (0:00:00.791) 0:00:04.095 **********
2025-05-03 00:27:12.947513 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:27:12.965127 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:27:12.997659 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:27:13.054101 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:27:13.055242 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:27:13.055275 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:27:13.055297 | orchestrator |
2025-05-03 00:27:13.119569 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-05-03 00:27:13.119639 | orchestrator | Saturday 03 May 2025 00:27:13 +0000 (0:00:00.194) 0:00:04.290 **********
2025-05-03 00:27:13.119666 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:27:13.144148 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:27:13.167210 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:27:13.209129 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:27:13.209587 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:27:13.210393 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:27:13.210912 | orchestrator |
2025-05-03 00:27:13.211534 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-05-03 00:27:13.211949 | orchestrator | Saturday 03 May 2025 00:27:13 +0000 (0:00:00.155) 0:00:04.445 **********
2025-05-03 00:27:13.830111 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:27:13.830976 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:27:13.832546 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:27:13.833347 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:27:13.835151 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:27:13.836315 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:27:13.837646 | orchestrator |
2025-05-03 00:27:13.838847 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-05-03 00:27:13.839573 | orchestrator | Saturday 03 May 2025 00:27:13 +0000 (0:00:00.617) 0:00:05.063 **********
2025-05-03 00:27:14.741066 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:27:14.741824 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:27:14.741865 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:27:14.741902 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:27:14.741925 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:27:14.742422 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:27:14.743148 | orchestrator |
2025-05-03 00:27:14.743697 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-05-03 00:27:14.744371 | orchestrator | Saturday 03 May 2025 00:27:14 +0000 (0:00:00.909) 0:00:05.973 **********
2025-05-03 00:27:15.991144 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-05-03 00:27:15.991689 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-05-03 00:27:15.991809 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-05-03 00:27:15.991843 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-05-03 00:27:15.992293 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-05-03 00:27:15.993063 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-05-03 00:27:15.993878 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-05-03 00:27:15.994746 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-05-03 00:27:15.995457 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-05-03 00:27:15.996036 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-05-03 00:27:15.996589 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-05-03 00:27:15.997072 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-05-03 00:27:15.997663 | orchestrator |
2025-05-03 00:27:15.998307 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-05-03 00:27:15.998577 | orchestrator | Saturday 03 May 2025 00:27:15 +0000 (0:00:01.245) 0:00:07.219 **********
2025-05-03 00:27:17.234417 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:27:17.234906 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:27:17.235594 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:27:17.236465 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:27:17.236930 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:27:17.238639 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:27:17.239238 | orchestrator |
2025-05-03 00:27:17.239854 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-05-03 00:27:17.242417 | orchestrator | Saturday 03 May 2025 00:27:17 +0000 (0:00:01.249) 0:00:08.468 **********
2025-05-03 00:27:18.492479 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-05-03 00:27:18.497023 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-05-03 00:27:18.497113 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-05-03 00:27:18.531299 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-05-03 00:27:18.536172 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-05-03 00:27:18.539315 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-05-03 00:27:18.539676 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-05-03 00:27:18.539710 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-05-03 00:27:18.541053 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-05-03 00:27:18.542542 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-05-03 00:27:18.543773 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-05-03 00:27:18.544802 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-05-03 00:27:18.545817 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-05-03 00:27:18.546218 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-05-03 00:27:18.547193 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-05-03 00:27:18.547745 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-05-03 00:27:18.548424 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-05-03 00:27:18.548909 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-05-03 00:27:18.549694 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-05-03 00:27:18.550384 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-05-03 00:27:18.550637 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-05-03 00:27:18.551363 | orchestrator |
2025-05-03 00:27:18.552040 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-05-03 00:27:18.552552 | orchestrator | Saturday 03 May 2025 00:27:18 +0000 (0:00:01.298) 0:00:09.767 **********
2025-05-03 00:27:19.119780 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:27:19.120196 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:27:19.123381 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:27:19.124536 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:27:19.124732 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:27:19.125575 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:27:19.126319 | orchestrator |
2025-05-03 00:27:19.128540 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-05-03 00:27:19.129828 | orchestrator | Saturday 03 May 2025 00:27:19 +0000 (0:00:00.587) 0:00:10.354 **********
2025-05-03 00:27:19.215900 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:27:19.239397 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:27:19.287576 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:27:19.288241 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:27:19.288281 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:27:19.288820 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:27:19.290776 | orchestrator |
2025-05-03 00:27:19.977884 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-05-03 00:27:19.978160 | orchestrator | Saturday 03 May 2025 00:27:19 +0000 (0:00:00.168) 0:00:10.523 **********
2025-05-03 00:27:19.978205 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-03 00:27:19.978278 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:27:19.978325 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-05-03 00:27:19.978582 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-03 00:27:19.980437 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:27:19.980695 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:27:19.981142 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-03 00:27:19.981304 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-03 00:27:19.981842 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:27:19.982315 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:27:19.985217 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-05-03 00:27:19.986800 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:27:19.986916 | orchestrator |
2025-05-03 00:27:19.987001 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-05-03 00:27:20.020483 | orchestrator | Saturday 03 May 2025 00:27:19 +0000 (0:00:00.690) 0:00:11.213 **********
2025-05-03 00:27:20.020595 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:27:20.060730 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:27:20.081561 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:27:20.122482 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:27:20.123468 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:27:20.123508 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:27:20.123530 | orchestrator |
2025-05-03 00:27:20.124269 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-05-03 00:27:20.124840 | orchestrator | Saturday 03 May 2025 00:27:20 +0000 (0:00:00.141) 0:00:11.355 **********
2025-05-03 00:27:20.159985 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:27:20.179071 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:27:20.196943 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:27:20.223284 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:27:20.249481 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:27:20.249628 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:27:20.250259 | orchestrator |
2025-05-03 00:27:20.250585 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-05-03 00:27:20.250870 | orchestrator | Saturday 03 May 2025 00:27:20 +0000 (0:00:00.131) 0:00:11.486 **********
2025-05-03 00:27:20.305500 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:27:20.318315 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:27:20.342353 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:27:20.366464 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:27:20.394697 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:27:20.397613 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:27:20.397657 | orchestrator |
2025-05-03 00:27:20.397674 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-05-03 00:27:20.397697 | orchestrator | Saturday 03 May 2025 00:27:20 +0000 (0:00:00.144) 0:00:11.630 **********
2025-05-03 00:27:21.044635 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:27:21.046465 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:27:21.046588 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:27:21.047137 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:27:21.047172 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:27:21.047187 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:27:21.047202 | orchestrator |
2025-05-03 00:27:21.047218 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-05-03 00:27:21.047241 | orchestrator | Saturday 03 May 2025 00:27:21 +0000 (0:00:00.649) 0:00:12.279 **********
2025-05-03 00:27:21.160690 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:27:21.185022 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:27:21.215681 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:27:21.333538 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:27:21.339755 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:27:21.339824 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:27:21.340452 | orchestrator |
2025-05-03 00:27:21.340492 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 00:27:21.344657 | orchestrator | 2025-05-03 00:27:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-03 00:27:21.344720 | orchestrator | 2025-05-03 00:27:21 | INFO  | Please wait and do not abort execution.
2025-05-03 00:27:21.344747 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-03 00:27:21.348498 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-03 00:27:21.349156 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-03 00:27:21.354441 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-03 00:27:21.358923 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-03 00:27:21.360046 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-03 00:27:21.367373 | orchestrator |
2025-05-03 00:27:21.368776 | orchestrator | Saturday 03 May 2025 00:27:21 +0000 (0:00:00.288) 0:00:12.568 **********
2025-05-03 00:27:21.368803 | orchestrator | ===============================================================================
2025-05-03 00:27:21.368824 | orchestrator | Gathering Facts --------------------------------------------------------- 3.23s
2025-05-03 00:27:21.376408 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.30s
2025-05-03 00:27:21.378584 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.25s
2025-05-03 00:27:21.379053 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.25s
2025-05-03 00:27:21.379325 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.91s
2025-05-03 00:27:21.379557 | orchestrator | Do not require tty for all users ---------------------------------------- 0.79s
2025-05-03 00:27:21.382150 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.69s
2025-05-03 00:27:21.382786 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.65s
2025-05-03 00:27:21.382822 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.62s
2025-05-03 00:27:21.382837 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.59s
2025-05-03 00:27:21.382857 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.29s
2025-05-03 00:27:21.383300 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.19s
2025-05-03 00:27:21.383897 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.17s
2025-05-03 00:27:21.392392 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s
2025-05-03 00:27:21.798287 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s
2025-05-03 00:27:21.798399 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s
2025-05-03 00:27:21.798423 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.13s
2025-05-03 00:27:21.798477 | orchestrator | + osism apply --environment custom facts
2025-05-03 00:27:23.198454 | orchestrator | 2025-05-03 00:27:23 | INFO  | Trying to run play facts in environment custom
2025-05-03 00:27:23.246322 | orchestrator | 2025-05-03 00:27:23 | INFO  | Task 93714d22-3448-4d2b-87f1-a27300af0aa1 (facts) was prepared for execution.
2025-05-03 00:27:26.201919 | orchestrator | 2025-05-03 00:27:23 | INFO  | It takes a moment until task 93714d22-3448-4d2b-87f1-a27300af0aa1 (facts) has been started and output is visible here.
2025-05-03 00:27:26.202200 | orchestrator |
2025-05-03 00:27:26.203846 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-05-03 00:27:26.204643 | orchestrator |
2025-05-03 00:27:26.205590 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-05-03 00:27:26.205886 | orchestrator | Saturday 03 May 2025 00:27:26 +0000 (0:00:00.080) 0:00:00.082 **********
2025-05-03 00:27:27.428176 | orchestrator | ok: [testbed-manager]
2025-05-03 00:27:28.595937 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:27:28.596215 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:27:28.596249 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:27:28.596798 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:27:28.600533 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:27:28.602422 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:27:28.602790 | orchestrator |
2025-05-03 00:27:28.605315 | orchestrator | TASK [Copy fact file] **********************************************************
2025-05-03 00:27:28.606778 | orchestrator | Saturday 03 May 2025 00:27:28 +0000 (0:00:02.393) 0:00:02.476 **********
2025-05-03 00:27:29.714137 | orchestrator | ok: [testbed-manager]
2025-05-03 00:27:30.625989 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:27:30.626379 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:27:30.627330 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:27:30.628078 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:27:30.630209 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:27:30.630648 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:27:30.630759 | orchestrator |
2025-05-03 00:27:30.630795 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-05-03 00:27:30.631026 | orchestrator |
2025-05-03 00:27:30.631889 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-05-03 00:27:30.632256 | orchestrator | Saturday 03 May 2025 00:27:30 +0000 (0:00:02.028) 0:00:04.504 **********
2025-05-03 00:27:30.721038 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:27:30.721420 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:27:30.722217 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:27:30.722636 | orchestrator |
2025-05-03 00:27:30.723033 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-05-03 00:27:30.723427 | orchestrator | Saturday 03 May 2025 00:27:30 +0000 (0:00:00.097) 0:00:04.601 **********
2025-05-03 00:27:30.840604 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:27:30.841245 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:27:30.841293 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:27:30.841801 | orchestrator |
2025-05-03 00:27:30.842530 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-05-03 00:27:30.843084 | orchestrator | Saturday 03 May 2025 00:27:30 +0000 (0:00:00.118) 0:00:04.719 **********
2025-05-03 00:27:30.973645 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:27:30.974234 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:27:30.974916 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:27:30.975463 | orchestrator |
2025-05-03 00:27:30.975707 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-05-03 00:27:30.976154 | orchestrator | Saturday 03 May 2025 00:27:30 +0000 (0:00:00.136) 0:00:04.856 **********
2025-05-03 00:27:31.104219 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-03 00:27:31.104650 | orchestrator |
2025-05-03 00:27:31.105201 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-05-03 00:27:31.108737 | orchestrator | Saturday 03 May 2025 00:27:31 +0000 (0:00:00.129) 0:00:04.985 **********
2025-05-03 00:27:31.508663 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:27:31.510070 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:27:31.510595 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:27:31.511139 | orchestrator |
2025-05-03 00:27:31.511789 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-05-03 00:27:31.512353 | orchestrator | Saturday 03 May 2025 00:27:31 +0000 (0:00:00.404) 0:00:05.389 **********
2025-05-03 00:27:31.599082 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:27:31.600092 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:27:32.581444 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:27:32.581568 | orchestrator |
2025-05-03 00:27:32.581591 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-05-03 00:27:32.581657 | orchestrator | Saturday 03 May 2025 00:27:31 +0000 (0:00:00.091) 0:00:05.481 **********
2025-05-03 00:27:32.581693 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:27:32.581767 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:27:32.584187 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:27:33.023620 | orchestrator |
2025-05-03 00:27:33.023738 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-05-03 00:27:33.023757 | orchestrator | Saturday 03 May 2025 00:27:32 +0000 (0:00:00.981) 0:00:06.462 **********
2025-05-03 00:27:33.023787 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:27:33.024592 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:27:33.024635 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:27:33.024669 | orchestrator |
2025-05-03 00:27:33.025200 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-05-03 00:27:33.025691 | orchestrator | Saturday 03 May 2025 00:27:33 +0000 (0:00:00.439) 0:00:06.901 **********
2025-05-03 00:27:34.067129 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:27:34.067429 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:27:34.067495 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:27:34.068112 | orchestrator |
2025-05-03 00:27:34.068931 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-05-03 00:27:34.072615 | orchestrator | Saturday 03 May 2025 00:27:34 +0000 (0:00:01.044) 0:00:07.946 **********
2025-05-03 00:27:47.288651 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:27:47.388279 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:27:47.388397 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:27:47.388476 | orchestrator |
2025-05-03 00:27:47.388497 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-05-03 00:27:47.388518 | orchestrator | Saturday 03 May 2025 00:27:47 +0000 (0:00:13.217) 0:00:21.163 **********
2025-05-03 00:27:47.388550 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:27:47.388615 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:27:47.388636 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:27:47.389104 | orchestrator |
2025-05-03 00:27:47.389749 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-05-03 00:27:47.390118 | orchestrator | Saturday 03 May 2025 00:27:47 +0000 (0:00:00.106) 0:00:21.270 **********
2025-05-03 00:27:54.498805 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:27:54.499100 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:27:54.500274 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:27:54.501572 | orchestrator |
2025-05-03 00:27:54.502348 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-05-03 00:27:54.503405 | orchestrator | Saturday 03 May 2025 00:27:54 +0000 (0:00:07.108) 0:00:28.378 **********
2025-05-03 00:27:54.957726 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:27:54.960726 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:27:54.962153 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:27:54.962778 | orchestrator |
2025-05-03 00:27:54.963743 | orchestrator | TASK [Copy fact files] *********************************************************
2025-05-03 00:27:54.964079 | orchestrator | Saturday 03 May 2025 00:27:54 +0000 (0:00:00.459) 0:00:28.838 **********
2025-05-03 00:27:58.440680 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-05-03 00:27:58.440850 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-05-03 00:27:58.440879 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-05-03 00:27:58.441164 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-05-03 00:27:58.442618 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-05-03 00:27:58.444977 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-05-03 00:27:58.445622 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-05-03 00:27:58.446580 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-05-03 00:27:58.447212 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-05-03 00:27:58.448141 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-05-03 00:27:58.448925 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-05-03 00:27:58.449344 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-05-03 00:27:58.449753 | orchestrator |
2025-05-03 00:27:58.450272 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-05-03 00:27:58.451416 | orchestrator | Saturday 03 May 2025 00:27:58 +0000 (0:00:03.482) 0:00:32.320 **********
2025-05-03 00:27:59.628879 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:27:59.629338 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:27:59.630601 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:27:59.631043 | orchestrator |
2025-05-03 00:27:59.631972 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-03 00:27:59.632954 | orchestrator |
2025-05-03 00:27:59.634410 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-03 00:27:59.635632 | orchestrator | Saturday 03 May 2025 00:27:59 +0000 (0:00:01.188) 0:00:33.509 **********
2025-05-03 00:28:01.387698 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:28:04.653189 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:28:04.653446 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:28:04.654277 | orchestrator | ok: [testbed-manager]
2025-05-03 00:28:04.654375 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:28:04.654970 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:28:04.655655 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:28:04.656251 | orchestrator |
2025-05-03 00:28:04.656706 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 00:28:04.657086 | orchestrator | 2025-05-03 00:28:04 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-03 00:28:04.657338 | orchestrator | 2025-05-03 00:28:04 | INFO  | Please wait and do not abort execution.
2025-05-03 00:28:04.658010 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:28:04.658428 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:28:04.659052 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:28:04.659431 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:28:04.659810 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-03 00:28:04.660270 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-03 00:28:04.660804 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-03 00:28:04.661043 | orchestrator |
2025-05-03 00:28:04.661467 | orchestrator | Saturday 03 May 2025 00:28:04 +0000 (0:00:05.026) 0:00:38.535 **********
2025-05-03 00:28:04.661839 | orchestrator | ===============================================================================
2025-05-03 00:28:04.662221 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.22s
2025-05-03 00:28:04.662682 | orchestrator | Install required packages (Debian) -------------------------------------- 7.11s
2025-05-03 00:28:04.663047 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.03s
2025-05-03 00:28:04.663388 | orchestrator | Copy fact files --------------------------------------------------------- 3.48s
2025-05-03 00:28:04.663812 | orchestrator | Create custom facts directory ------------------------------------------- 2.39s
2025-05-03 00:28:04.664171 | orchestrator | Copy fact file ---------------------------------------------------------- 2.03s
2025-05-03 00:28:04.664292 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.19s
2025-05-03 00:28:04.664637 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.04s
2025-05-03 00:28:04.665112 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.98s
2025-05-03 00:28:04.665477 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s
2025-05-03 00:28:04.665752 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.44s
2025-05-03 00:28:04.666076 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.40s
2025-05-03 00:28:04.666428 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.14s
2025-05-03 00:28:04.666677 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s
2025-05-03 00:28:04.667076 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.12s
2025-05-03 00:28:04.667330 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2025-05-03 00:28:04.667711 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2025-05-03 00:28:04.667907 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.09s
2025-05-03 00:28:05.154389 | orchestrator | + osism apply bootstrap
2025-05-03 00:28:06.518824 | orchestrator | 2025-05-03 00:28:06 | INFO  | Task e5a4d0dc-a5cb-4615-a091-95b15585f861 (bootstrap) was prepared for execution.
2025-05-03 00:28:09.572405 | orchestrator | 2025-05-03 00:28:06 | INFO  | It takes a moment until task e5a4d0dc-a5cb-4615-a091-95b15585f861 (bootstrap) has been started and output is visible here.
2025-05-03 00:28:09.572523 | orchestrator |
2025-05-03 00:28:09.572568 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-05-03 00:28:09.573302 | orchestrator |
2025-05-03 00:28:09.574332 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-05-03 00:28:09.575076 | orchestrator | Saturday 03 May 2025 00:28:09 +0000 (0:00:00.103) 0:00:00.103 **********
2025-05-03 00:28:09.642883 | orchestrator | ok: [testbed-manager]
2025-05-03 00:28:09.671345 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:28:09.695854 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:28:09.723758 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:28:09.801515 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:28:09.805755 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:28:09.807081 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:28:09.807724 | orchestrator |
2025-05-03 00:28:09.808729 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-03 00:28:09.809393 | orchestrator |
2025-05-03 00:28:09.810127 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-03 00:28:09.812597 | orchestrator | Saturday 03 May 2025 00:28:09 +0000 (0:00:00.231) 0:00:00.334 **********
2025-05-03 00:28:13.453836 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:28:13.454531 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:28:13.454916 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:28:13.455140 | orchestrator | ok: [testbed-manager]
2025-05-03 00:28:13.456038 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:28:13.456211 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:28:13.456959 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:28:13.458201 | orchestrator |
2025-05-03 00:28:13.459092 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-05-03 00:28:13.460069 | orchestrator |
2025-05-03 00:28:13.460692 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-03 00:28:13.461273 | orchestrator | Saturday 03 May 2025 00:28:13 +0000 (0:00:03.652) 0:00:03.987 **********
2025-05-03 00:28:13.544811 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-05-03 00:28:13.545041 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-05-03 00:28:13.545125 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-05-03 00:28:13.581694 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-03 00:28:13.583590 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-05-03 00:28:13.583727 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-03 00:28:13.584324 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-05-03 00:28:13.633028 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-03 00:28:13.633454 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-03 00:28:13.633625 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-05-03 00:28:13.633981 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-03 00:28:13.634255 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-03 00:28:13.634483 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-05-03 00:28:13.634764 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-05-03 00:28:13.635092 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-03 00:28:13.881351 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-03 00:28:13.881789 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-05-03 00:28:13.882596 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-03 00:28:13.883249 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-03 00:28:13.884660 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-05-03 00:28:13.885538 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-03 00:28:13.886504 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-03 00:28:13.887501 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:28:13.888481 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-03 00:28:13.889488 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-03 00:28:13.889840 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-05-03 00:28:13.890444 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:28:13.890863 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-05-03 00:28:13.891663 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-03 00:28:13.892076 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-03 00:28:13.892694 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:28:13.893294 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-03 00:28:13.893824 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-05-03 00:28:13.894155 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-03 00:28:13.894600 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-05-03 00:28:13.895026 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-03 00:28:13.895438 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-03 00:28:13.896216 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-05-03 00:28:13.896602 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-05-03 00:28:13.896848 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-03 00:28:13.897276 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-03 00:28:13.897627 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:28:13.898073 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-05-03 00:28:13.898436 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-05-03 00:28:13.898759 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-03 00:28:13.899243 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-03 00:28:13.899780 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-05-03 00:28:13.900035 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-03 00:28:13.900365 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:28:13.900979 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-03 00:28:13.901169 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-03 00:28:13.901462 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-03 00:28:13.901728 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-03 00:28:13.902001 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:28:13.902393 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-03 00:28:13.902662 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:28:13.906233 | orchestrator |
2025-05-03 00:28:13.906336 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-05-03 00:28:13.906636 | orchestrator |
2025-05-03 00:28:13.907123 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] *************************
2025-05-03 00:28:13.907682 | orchestrator | Saturday 03 May 2025 00:28:13 +0000 (0:00:00.425) 0:00:04.413 **********
2025-05-03 00:28:13.950819 | orchestrator | ok: [testbed-manager]
2025-05-03 00:28:13.975895 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:28:13.996200 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:28:14.022624 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:28:14.087520 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:28:14.087697 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:28:14.088685 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:28:14.088825 | orchestrator |
2025-05-03 00:28:14.089199 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-05-03 00:28:14.090721 | orchestrator | Saturday 03 May 2025 00:28:14 +0000 (0:00:00.207) 0:00:04.620 **********
2025-05-03 00:28:15.270483 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:28:15.272329 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:28:15.272536 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:28:15.272607 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:28:15.272642 | orchestrator | ok: [testbed-manager]
2025-05-03 00:28:15.272663 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:28:15.273092 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:28:15.273535 | orchestrator |
2025-05-03 00:28:15.274066 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-05-03 00:28:15.274545 | orchestrator | Saturday 03 May 2025 00:28:15 +0000 (0:00:01.181) 0:00:05.801 **********
2025-05-03 00:28:16.410383 | orchestrator | ok: [testbed-manager]
2025-05-03 00:28:16.411195 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:28:16.412479 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:28:16.413400 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:28:16.413430 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:28:16.413762 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:28:16.414222 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:28:16.414633 | orchestrator |
2025-05-03 00:28:16.414952 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-05-03 00:28:16.415457 | orchestrator | Saturday 03 May 2025 00:28:16 +0000 (0:00:01.140) 0:00:06.942 **********
2025-05-03 00:28:16.662558 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:28:16.663234 | orchestrator |
2025-05-03 00:28:16.668157 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-05-03 00:28:16.668872 | orchestrator | Saturday 03 May 2025 00:28:16 +0000 (0:00:00.252) 0:00:07.195 **********
2025-05-03 00:28:18.693699 | orchestrator | changed: [testbed-manager]
2025-05-03 00:28:18.693884 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:28:18.694411 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:28:18.695782 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:28:18.699319 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:28:18.701093 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:28:18.701957 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:28:18.703058 | orchestrator |
2025-05-03 00:28:18.703785 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-05-03 00:28:18.703853 | orchestrator | Saturday 03 May 2025 00:28:18 +0000 (0:00:02.030) 0:00:09.225 **********
2025-05-03 00:28:18.758607 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:28:18.932268 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:28:18.933183 | orchestrator |
2025-05-03 00:28:18.936545 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-05-03 00:28:19.982966 | orchestrator | Saturday 03 May 2025 00:28:18 +0000 (0:00:00.239) 0:00:09.464 **********
2025-05-03 00:28:19.983107 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:28:19.983940 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:28:19.983966 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:28:19.983981 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:28:19.983996 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:28:19.984010 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:28:19.984030 | orchestrator |
2025-05-03 00:28:19.985400 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2025-05-03 00:28:19.985458 | orchestrator | Saturday 03 May 2025 00:28:19 +0000 (0:00:01.044) 0:00:10.508 **********
2025-05-03 00:28:20.038596 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:28:20.565479 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:28:20.565733 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:28:20.565763 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:28:20.565785 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:28:20.567167 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:28:20.568015 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:28:20.568977 | orchestrator |
2025-05-03 00:28:20.569958 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2025-05-03 00:28:20.570351 | orchestrator | Saturday 03 May 2025 00:28:20 +0000 (0:00:00.585) 0:00:11.094 **********
2025-05-03 00:28:20.656588 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:28:20.681065 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:28:20.700454 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:28:20.998799 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:28:21.001220 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:28:21.001321 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:28:21.002213 | orchestrator | ok: [testbed-manager]
2025-05-03 00:28:21.004514 | orchestrator |
2025-05-03 00:28:21.004689 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-05-03 00:28:21.005693 | orchestrator | Saturday 03 May 2025 00:28:20 +0000 (0:00:00.425) 0:00:11.520 **********
2025-05-03 00:28:21.074524 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:28:21.097715 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:28:21.128342 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:28:21.153383 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:28:21.214897 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:28:21.215494 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:28:21.216792 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:28:21.219447 | orchestrator |
2025-05-03 00:28:21.220485 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-05-03 00:28:21.221497 | orchestrator | Saturday 03 May 2025 00:28:21 +0000 (0:00:00.226) 0:00:11.746 **********
2025-05-03 00:28:21.554611 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:28:21.556747 | orchestrator |
2025-05-03 00:28:21.558114 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-05-03 00:28:21.560086 | orchestrator | Saturday 03 May 2025 00:28:21 +0000 (0:00:00.337) 0:00:12.084 **********
2025-05-03 00:28:21.882400 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:28:21.882569 | orchestrator |
2025-05-03 00:28:21.883187 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-05-03 00:28:21.884199 | orchestrator | Saturday 03 May 2025 00:28:21 +0000 (0:00:00.329) 0:00:12.413 **********
2025-05-03 00:28:23.096373 | orchestrator | ok: [testbed-manager]
2025-05-03 00:28:23.097749 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:28:23.098005 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:28:23.098117 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:28:23.098772 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:28:23.099095 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:28:23.099648 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:28:23.100071 | orchestrator |
2025-05-03 00:28:23.100889 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-05-03 00:28:23.101195 | orchestrator | Saturday 03 May 2025 00:28:23 +0000 (0:00:01.195) 0:00:13.608 **********
2025-05-03 00:28:23.148390 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:28:23.176376 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:28:23.196836 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:28:23.220177 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:28:23.279540 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:28:23.280068 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:28:23.281546 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:28:23.282436 | orchestrator |
2025-05-03 00:28:23.283148 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-05-03 00:28:23.283765 | orchestrator | Saturday 03 May 2025 00:28:23 +0000 (0:00:00.204) 0:00:13.812 **********
2025-05-03 00:28:23.854556 | orchestrator | ok: [testbed-manager]
2025-05-03 00:28:23.857711 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:28:23.859031 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:28:23.859071 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:28:23.859628 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:28:23.860544 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:28:23.861091 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:28:23.862082 | orchestrator |
2025-05-03 00:28:23.862713 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-05-03 00:28:23.863703 | orchestrator | Saturday 03 May 2025 00:28:23 +0000 (0:00:00.573) 0:00:14.385 **********
2025-05-03 00:28:23.954719 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:28:23.980261 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:28:24.003396 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:28:24.083435 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:28:24.084478 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:28:24.084903 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:28:24.084965 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:28:24.085759 | orchestrator |
2025-05-03 00:28:24.086610 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-05-03 00:28:24.087024 | orchestrator | Saturday 03 May 2025 00:28:24 +0000 (0:00:00.229) 0:00:14.615 **********
2025-05-03 00:28:24.617064 | orchestrator | ok: [testbed-manager]
2025-05-03 00:28:24.617277 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:28:24.617907 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:28:24.618075 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:28:24.618773 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:28:24.619182 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:28:24.619761 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:28:24.620322 | orchestrator |
2025-05-03 00:28:24.620980 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-05-03 00:28:24.621673 | orchestrator | Saturday 03 May 2025 00:28:24 +0000 (0:00:00.533) 0:00:15.149 **********
2025-05-03 00:28:25.708583 | orchestrator | ok: [testbed-manager]
2025-05-03 00:28:25.709948 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:28:25.710119 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:28:25.711773 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:28:25.713166 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:28:25.714181 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:28:25.714978 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:28:25.715891 | orchestrator |
2025-05-03 00:28:25.716605 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-05-03 00:28:25.717260 | orchestrator | Saturday 03 May 2025 00:28:25 +0000 (0:00:01.088) 0:00:16.238 **********
2025-05-03 00:28:26.893886 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:28:26.894177 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:28:26.895350 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:28:26.895492 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:28:26.895989 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:28:26.896260 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:28:26.897318 | orchestrator | ok: [testbed-manager]
2025-05-03 00:28:26.898432 | orchestrator |
2025-05-03 00:28:26.898783 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-05-03 00:28:26.899246 | orchestrator | Saturday 03 May 2025 00:28:26 +0000 (0:00:01.187) 0:00:17.425 **********
2025-05-03 00:28:27.235448 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:28:27.236325 | orchestrator |
2025-05-03 00:28:27.236755 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-05-03 00:28:27.237784 | orchestrator | Saturday 03 May 2025 00:28:27 +0000 (0:00:00.340) 0:00:17.766 **********
2025-05-03 00:28:27.310467 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:28:28.687045 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:28:28.687572 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:28:28.688115 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:28:28.688707 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:28:28.689738 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:28:28.690449 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:28:28.691190 | orchestrator |
2025-05-03 00:28:28.691806 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-05-03 00:28:28.692619 | orchestrator | Saturday 03 May 2025 00:28:28 +0000 (0:00:01.450) 0:00:19.217 **********
2025-05-03 00:28:28.774285 | orchestrator | ok: [testbed-manager]
2025-05-03 00:28:28.803815 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:28:28.836278 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:28:28.861541 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:28:28.925601 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:28:28.927790 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:28:28.928370 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:28:28.928407 | orchestrator |
2025-05-03 00:28:28.928979 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-05-03 00:28:28.929397 | orchestrator | Saturday 03 May 2025 00:28:28
+0000 (0:00:00.233) 0:00:19.450 ********** 2025-05-03 00:28:28.991790 | orchestrator | ok: [testbed-manager] 2025-05-03 00:28:29.011852 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:28:29.036754 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:28:29.067441 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:28:29.155589 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:28:29.156351 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:28:29.157397 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:28:29.158629 | orchestrator | 2025-05-03 00:28:29.159055 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-03 00:28:29.159822 | orchestrator | Saturday 03 May 2025 00:28:29 +0000 (0:00:00.235) 0:00:19.686 ********** 2025-05-03 00:28:29.229227 | orchestrator | ok: [testbed-manager] 2025-05-03 00:28:29.257351 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:28:29.279548 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:28:29.304694 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:28:29.378315 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:28:29.379358 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:28:29.381019 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:28:29.381835 | orchestrator | 2025-05-03 00:28:29.382717 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-03 00:28:29.383655 | orchestrator | Saturday 03 May 2025 00:28:29 +0000 (0:00:00.225) 0:00:19.911 ********** 2025-05-03 00:28:29.663822 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:28:29.664431 | orchestrator | 2025-05-03 00:28:29.665397 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-03 00:28:29.670636 | 
orchestrator | Saturday 03 May 2025 00:28:29 +0000 (0:00:00.282) 0:00:20.193 ********** 2025-05-03 00:28:30.217754 | orchestrator | ok: [testbed-manager] 2025-05-03 00:28:30.218312 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:28:30.218354 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:28:30.218370 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:28:30.218394 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:28:30.218790 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:28:30.219308 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:28:30.219913 | orchestrator | 2025-05-03 00:28:30.220477 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-03 00:28:30.221022 | orchestrator | Saturday 03 May 2025 00:28:30 +0000 (0:00:00.553) 0:00:20.747 ********** 2025-05-03 00:28:30.301157 | orchestrator | skipping: [testbed-manager] 2025-05-03 00:28:30.327814 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:28:30.358302 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:28:30.376561 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:28:30.446856 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:28:30.448033 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:28:30.448706 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:28:30.450190 | orchestrator | 2025-05-03 00:28:30.450341 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-03 00:28:30.450834 | orchestrator | Saturday 03 May 2025 00:28:30 +0000 (0:00:00.233) 0:00:20.980 ********** 2025-05-03 00:28:31.506894 | orchestrator | changed: [testbed-manager] 2025-05-03 00:28:31.507350 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:28:31.507712 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:28:31.508272 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:28:31.508959 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:28:31.509580 | orchestrator | 
changed: [testbed-node-1] 2025-05-03 00:28:31.510388 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:28:31.511132 | orchestrator | 2025-05-03 00:28:31.512094 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-03 00:28:31.512640 | orchestrator | Saturday 03 May 2025 00:28:31 +0000 (0:00:01.058) 0:00:22.038 ********** 2025-05-03 00:28:32.053368 | orchestrator | ok: [testbed-manager] 2025-05-03 00:28:32.054585 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:28:32.054676 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:28:32.054699 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:28:32.054726 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:28:32.054753 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:28:32.056121 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:28:32.057152 | orchestrator | 2025-05-03 00:28:32.058129 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-03 00:28:32.059229 | orchestrator | Saturday 03 May 2025 00:28:32 +0000 (0:00:00.545) 0:00:22.584 ********** 2025-05-03 00:28:33.180646 | orchestrator | ok: [testbed-manager] 2025-05-03 00:28:33.180811 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:28:33.181392 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:28:33.183321 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:28:33.185913 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:28:33.186602 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:28:33.187604 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:28:33.188334 | orchestrator | 2025-05-03 00:28:33.188665 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-03 00:28:33.189275 | orchestrator | Saturday 03 May 2025 00:28:33 +0000 (0:00:01.126) 0:00:23.710 ********** 2025-05-03 00:28:46.725558 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:28:46.725790 | orchestrator | ok: 
[testbed-node-3] 2025-05-03 00:28:46.725817 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:28:46.725840 | orchestrator | changed: [testbed-manager] 2025-05-03 00:28:46.727205 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:28:46.731204 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:28:46.731515 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:28:46.732782 | orchestrator | 2025-05-03 00:28:46.733595 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-05-03 00:28:46.733671 | orchestrator | Saturday 03 May 2025 00:28:46 +0000 (0:00:13.540) 0:00:37.251 ********** 2025-05-03 00:28:46.800193 | orchestrator | ok: [testbed-manager] 2025-05-03 00:28:46.826137 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:28:46.849948 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:28:46.876649 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:28:46.926770 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:28:46.927480 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:28:46.928449 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:28:46.929182 | orchestrator | 2025-05-03 00:28:46.929900 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-05-03 00:28:46.930432 | orchestrator | Saturday 03 May 2025 00:28:46 +0000 (0:00:00.209) 0:00:37.460 ********** 2025-05-03 00:28:46.995892 | orchestrator | ok: [testbed-manager] 2025-05-03 00:28:47.021742 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:28:47.053678 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:28:47.067367 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:28:47.121823 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:28:47.122104 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:28:47.125198 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:28:47.200716 | orchestrator | 2025-05-03 00:28:47.200799 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to 
default value] *** 2025-05-03 00:28:47.200808 | orchestrator | Saturday 03 May 2025 00:28:47 +0000 (0:00:00.194) 0:00:37.654 ********** 2025-05-03 00:28:47.200825 | orchestrator | ok: [testbed-manager] 2025-05-03 00:28:47.225868 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:28:47.249388 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:28:47.277542 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:28:47.328361 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:28:47.328519 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:28:47.329294 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:28:47.329801 | orchestrator | 2025-05-03 00:28:47.330708 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-05-03 00:28:47.331720 | orchestrator | Saturday 03 May 2025 00:28:47 +0000 (0:00:00.207) 0:00:37.861 ********** 2025-05-03 00:28:47.620374 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:28:47.620602 | orchestrator | 2025-05-03 00:28:47.621423 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-05-03 00:28:47.622158 | orchestrator | Saturday 03 May 2025 00:28:47 +0000 (0:00:00.291) 0:00:38.153 ********** 2025-05-03 00:28:49.281455 | orchestrator | ok: [testbed-manager] 2025-05-03 00:28:49.281694 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:28:49.282640 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:28:49.283976 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:28:49.285754 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:28:49.286503 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:28:49.287152 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:28:49.287747 | orchestrator | 2025-05-03 00:28:49.288627 | orchestrator | TASK 
[osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-05-03 00:28:49.289022 | orchestrator | Saturday 03 May 2025 00:28:49 +0000 (0:00:01.658) 0:00:39.812 ********** 2025-05-03 00:28:50.332111 | orchestrator | changed: [testbed-manager] 2025-05-03 00:28:50.332232 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:28:50.335480 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:28:50.336425 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:28:50.337342 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:28:50.338713 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:28:50.339270 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:28:50.340213 | orchestrator | 2025-05-03 00:28:50.341025 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-05-03 00:28:50.341957 | orchestrator | Saturday 03 May 2025 00:28:50 +0000 (0:00:01.051) 0:00:40.863 ********** 2025-05-03 00:28:51.133675 | orchestrator | ok: [testbed-manager] 2025-05-03 00:28:51.134281 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:28:51.135461 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:28:51.136080 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:28:51.136966 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:28:51.137617 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:28:51.138445 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:28:51.138991 | orchestrator | 2025-05-03 00:28:51.139542 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-05-03 00:28:51.140266 | orchestrator | Saturday 03 May 2025 00:28:51 +0000 (0:00:00.802) 0:00:41.666 ********** 2025-05-03 00:28:51.433534 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 
00:28:51.433710 | orchestrator | 2025-05-03 00:28:51.435401 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-05-03 00:28:51.436742 | orchestrator | Saturday 03 May 2025 00:28:51 +0000 (0:00:00.299) 0:00:41.965 ********** 2025-05-03 00:28:52.510434 | orchestrator | changed: [testbed-manager] 2025-05-03 00:28:52.510974 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:28:52.512359 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:28:52.513491 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:28:52.515964 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:28:52.516684 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:28:52.517252 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:28:52.517887 | orchestrator | 2025-05-03 00:28:52.518795 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-05-03 00:28:52.519189 | orchestrator | Saturday 03 May 2025 00:28:52 +0000 (0:00:01.075) 0:00:43.041 ********** 2025-05-03 00:28:52.606245 | orchestrator | skipping: [testbed-manager] 2025-05-03 00:28:52.630358 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:28:52.655663 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:28:52.813101 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:28:52.813262 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:28:52.813285 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:28:52.813305 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:28:52.814375 | orchestrator | 2025-05-03 00:28:52.816255 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-05-03 00:29:03.811335 | orchestrator | Saturday 03 May 2025 00:28:52 +0000 (0:00:00.303) 0:00:43.345 ********** 2025-05-03 00:29:03.811500 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:29:03.811550 | orchestrator | changed: [testbed-node-3] 2025-05-03 
00:29:03.811560 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:29:03.811569 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:29:03.811577 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:29:03.811586 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:29:03.811598 | orchestrator | changed: [testbed-manager] 2025-05-03 00:29:03.812115 | orchestrator | 2025-05-03 00:29:03.813070 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-05-03 00:29:03.813361 | orchestrator | Saturday 03 May 2025 00:29:03 +0000 (0:00:10.991) 0:00:54.336 ********** 2025-05-03 00:29:04.652007 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:29:04.652609 | orchestrator | ok: [testbed-manager] 2025-05-03 00:29:04.653628 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:29:04.653856 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:29:04.657514 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:29:04.658241 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:29:04.658647 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:29:04.659108 | orchestrator | 2025-05-03 00:29:04.660689 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-05-03 00:29:04.661293 | orchestrator | Saturday 03 May 2025 00:29:04 +0000 (0:00:00.848) 0:00:55.184 ********** 2025-05-03 00:29:06.461636 | orchestrator | ok: [testbed-manager] 2025-05-03 00:29:06.463071 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:29:06.463810 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:29:06.463860 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:29:06.464429 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:29:06.464775 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:29:06.465660 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:29:06.466504 | orchestrator | 2025-05-03 00:29:06.468138 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 
2025-05-03 00:29:06.470329 | orchestrator | Saturday 03 May 2025 00:29:06 +0000 (0:00:01.809) 0:00:56.994 ********** 2025-05-03 00:29:06.550843 | orchestrator | ok: [testbed-manager] 2025-05-03 00:29:06.576676 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:29:06.602991 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:29:06.627121 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:29:06.692097 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:29:06.693343 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:29:06.696412 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:29:06.697258 | orchestrator | 2025-05-03 00:29:06.697495 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-05-03 00:29:06.697713 | orchestrator | Saturday 03 May 2025 00:29:06 +0000 (0:00:00.230) 0:00:57.224 ********** 2025-05-03 00:29:06.761793 | orchestrator | ok: [testbed-manager] 2025-05-03 00:29:06.784185 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:29:06.810442 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:29:06.837818 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:29:06.895963 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:29:06.897247 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:29:06.897529 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:29:06.899781 | orchestrator | 2025-05-03 00:29:06.900062 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-05-03 00:29:06.900315 | orchestrator | Saturday 03 May 2025 00:29:06 +0000 (0:00:00.203) 0:00:57.428 ********** 2025-05-03 00:29:07.191077 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:29:07.193726 | orchestrator | 2025-05-03 00:29:08.692580 | orchestrator | TASK 
[osism.commons.packages : Install needrestart package] ******************** 2025-05-03 00:29:08.692698 | orchestrator | Saturday 03 May 2025 00:29:07 +0000 (0:00:00.295) 0:00:57.723 ********** 2025-05-03 00:29:08.692735 | orchestrator | ok: [testbed-manager] 2025-05-03 00:29:08.693243 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:29:08.694572 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:29:08.695707 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:29:08.696472 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:29:08.697069 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:29:08.698610 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:29:08.699734 | orchestrator | 2025-05-03 00:29:08.700343 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-05-03 00:29:08.700712 | orchestrator | Saturday 03 May 2025 00:29:08 +0000 (0:00:01.499) 0:00:59.222 ********** 2025-05-03 00:29:09.276608 | orchestrator | changed: [testbed-manager] 2025-05-03 00:29:09.277601 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:29:09.278595 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:29:09.280072 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:29:09.281103 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:29:09.282282 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:29:09.282953 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:29:09.283696 | orchestrator | 2025-05-03 00:29:09.284540 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-05-03 00:29:09.285021 | orchestrator | Saturday 03 May 2025 00:29:09 +0000 (0:00:00.584) 0:00:59.807 ********** 2025-05-03 00:29:09.350226 | orchestrator | ok: [testbed-manager] 2025-05-03 00:29:09.378766 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:29:09.401724 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:29:09.428147 | orchestrator | ok: [testbed-node-5] 2025-05-03 
00:29:09.482551 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:29:09.483648 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:29:09.484675 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:29:09.485713 | orchestrator | 2025-05-03 00:29:09.486311 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-05-03 00:29:09.487078 | orchestrator | Saturday 03 May 2025 00:29:09 +0000 (0:00:00.208) 0:01:00.016 ********** 2025-05-03 00:29:10.562329 | orchestrator | ok: [testbed-manager] 2025-05-03 00:29:10.563410 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:29:10.563454 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:29:10.564034 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:29:10.565272 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:29:10.566383 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:29:10.566684 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:29:10.567181 | orchestrator | 2025-05-03 00:29:10.568229 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-05-03 00:29:10.569059 | orchestrator | Saturday 03 May 2025 00:29:10 +0000 (0:00:01.077) 0:01:01.093 ********** 2025-05-03 00:29:12.147809 | orchestrator | changed: [testbed-manager] 2025-05-03 00:29:12.148984 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:29:12.150766 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:29:12.151796 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:29:12.152863 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:29:12.153663 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:29:12.154494 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:29:12.155222 | orchestrator | 2025-05-03 00:29:12.155852 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-05-03 00:29:12.156694 | orchestrator | Saturday 03 May 2025 00:29:12 +0000 (0:00:01.584) 0:01:02.678 ********** 2025-05-03 
00:29:20.332752 | orchestrator | ok: [testbed-manager] 2025-05-03 00:29:20.333571 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:29:20.334314 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:29:20.334449 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:29:20.335115 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:29:20.336500 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:29:20.336989 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:29:20.337737 | orchestrator | 2025-05-03 00:29:20.338572 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-05-03 00:29:20.338750 | orchestrator | Saturday 03 May 2025 00:29:20 +0000 (0:00:08.185) 0:01:10.863 ********** 2025-05-03 00:29:59.668350 | orchestrator | ok: [testbed-manager] 2025-05-03 00:29:59.670139 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:29:59.670180 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:29:59.670187 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:29:59.670193 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:29:59.670205 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:29:59.671366 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:29:59.672190 | orchestrator | 2025-05-03 00:29:59.672238 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-05-03 00:29:59.672265 | orchestrator | Saturday 03 May 2025 00:29:59 +0000 (0:00:39.329) 0:01:50.193 ********** 2025-05-03 00:31:21.147723 | orchestrator | changed: [testbed-manager] 2025-05-03 00:31:21.148082 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:31:21.148152 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:31:21.148179 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:31:21.148204 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:31:21.148229 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:31:21.148253 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:31:21.148339 | 
orchestrator | 2025-05-03 00:31:21.148432 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-05-03 00:31:21.148468 | orchestrator | Saturday 03 May 2025 00:31:21 +0000 (0:01:21.476) 0:03:11.671 ********** 2025-05-03 00:31:22.668208 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:31:22.669479 | orchestrator | ok: [testbed-manager] 2025-05-03 00:31:22.669698 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:31:22.670604 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:31:22.671775 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:31:22.672185 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:31:22.672779 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:31:22.673491 | orchestrator | 2025-05-03 00:31:22.674545 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-05-03 00:31:22.674803 | orchestrator | Saturday 03 May 2025 00:31:22 +0000 (0:00:01.527) 0:03:13.198 ********** 2025-05-03 00:31:35.272857 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:31:35.273158 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:31:35.273175 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:31:35.273186 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:31:35.273233 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:31:35.273245 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:31:35.274864 | orchestrator | changed: [testbed-manager] 2025-05-03 00:31:35.275929 | orchestrator | 2025-05-03 00:31:35.276616 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-05-03 00:31:35.277040 | orchestrator | Saturday 03 May 2025 00:31:35 +0000 (0:00:12.601) 0:03:25.800 ********** 2025-05-03 00:31:35.634289 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, 
testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-05-03 00:31:35.635030 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-05-03 00:31:35.636055 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-05-03 00:31:35.639006 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-05-03 00:31:35.639782 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-05-03 00:31:35.639811 | orchestrator |
2025-05-03 00:31:35.639833 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-05-03 00:31:35.640062 | orchestrator | Saturday 03 May 2025 00:31:35 +0000 (0:00:00.365) 0:03:26.166 **********
2025-05-03 00:31:35.697867 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-03 00:31:35.698146 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-03 00:31:35.727822 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:31:35.776453 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-03 00:31:35.776943 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:31:35.777776 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-03 00:31:35.802223 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:31:35.828326 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:31:36.356033 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-03 00:31:36.357479 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-03 00:31:36.358359 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-03 00:31:36.359126 | orchestrator |
2025-05-03 00:31:36.359447 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-05-03 00:31:36.359977 | orchestrator | Saturday 03 May 2025 00:31:36 +0000 (0:00:00.720) 0:03:26.887 **********
2025-05-03 00:31:36.410596 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-03 00:31:36.411362 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-03 00:31:36.473619 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-03 00:31:36.474570 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-03 00:31:36.474595 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-03 00:31:36.474608 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-03 00:31:36.474620 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-03 00:31:36.474633 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-03 00:31:36.474662 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-03 00:31:36.475219 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-03 00:31:36.475440 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-03 00:31:36.475467 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-03 00:31:36.475955 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-03 00:31:36.476227 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-03 00:31:36.477622 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-03 00:31:36.477685 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-03 00:31:36.478135 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-03 00:31:36.478758 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-03 00:31:36.479085 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-03 00:31:36.479352 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-03 00:31:36.479952 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-03 00:31:36.480146 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-03 00:31:36.480686 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-03 00:31:36.481065 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-03 00:31:36.481344 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-03 00:31:36.481948 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-03 00:31:36.482187 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-03 00:31:36.482531 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-03 00:31:36.509842 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:31:36.510070 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-03 00:31:36.510373 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-03 00:31:36.511091 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-03 00:31:36.551371 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-03 00:31:36.551849 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:31:36.551992 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-03 00:31:36.552910 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-03 00:31:36.553322 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-03 00:31:36.553525 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-03 00:31:36.554168 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-03 00:31:36.554512 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-03 00:31:36.554873 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-03 00:31:36.555275 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-03 00:31:36.582137 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:31:40.084683 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:31:40.085077 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-03 00:31:40.086644 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-03 00:31:40.087068 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-03 00:31:40.088666 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-03 00:31:40.090184 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-03 00:31:40.091410 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-03 00:31:40.092425 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-03 00:31:40.093072 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-03 00:31:40.093999 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-03 00:31:40.095268 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-03 00:31:40.097950 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-03 00:31:40.098284 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-03 00:31:40.099184 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-03 00:31:40.099699 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-03 00:31:40.100395 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-03 00:31:40.100942 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-03 00:31:40.102191 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-03 00:31:40.103198 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-03 00:31:40.103517 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-03 00:31:40.103970 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
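The RabbitMQ keepalive values applied above (tcp_keepalive_time=6, tcp_keepalive_intvl=3, tcp_keepalive_probes=3) have a simple operational meaning: the kernel detects a dead peer after the idle period plus one interval per unanswered probe. A minimal sketch of that arithmetic, using only the values from this play output:

```python
# Worst-case dead-peer detection time for the keepalive tuning above.
# Values are taken from the rabbitmq sysctl group in the play output.
tcp_keepalive_time = 6    # seconds of idle before the first probe
tcp_keepalive_intvl = 3   # seconds between unanswered probes
tcp_keepalive_probes = 3  # unanswered probes before the peer is declared dead

detection_time = tcp_keepalive_time + tcp_keepalive_intvl * tcp_keepalive_probes
print(detection_time)  # 15 seconds, versus roughly 2 hours with kernel defaults
```

This is why the tuning matters for RabbitMQ clustering: half-open connections are torn down in seconds rather than hours.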
2025-05-03 00:31:40.104667 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-03 00:31:40.105112 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-03 00:31:40.105502 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-03 00:31:40.106068 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-03 00:31:40.106439 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-03 00:31:40.106821 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-03 00:31:40.107264 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-03 00:31:40.107635 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-03 00:31:40.108043 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-03 00:31:40.108728 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-03 00:31:40.112390 | orchestrator |
2025-05-03 00:31:40.113383 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-05-03 00:31:40.113841 | orchestrator | Saturday 03 May 2025 00:31:40 +0000 (0:00:03.729) 0:03:30.616 **********
2025-05-03 00:31:40.689096 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-03 00:31:40.689390 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-03 00:31:40.690119 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-03 00:31:40.691311 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-03 00:31:40.692145 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-03 00:31:40.692367 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-03 00:31:40.693103 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-03 00:31:40.693975 | orchestrator |
2025-05-03 00:31:40.694728 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-05-03 00:31:40.695487 | orchestrator | Saturday 03 May 2025 00:31:40 +0000 (0:00:00.605) 0:03:31.221 **********
2025-05-03 00:31:40.744267 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-03 00:31:40.770233 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:31:40.854691 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-03 00:31:41.187966 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-03 00:31:41.188142 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:31:41.188221 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:31:41.189147 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-03 00:31:41.189904 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:31:41.190492 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-03 00:31:41.191099 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-03 00:31:41.191613 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-03 00:31:41.191919 | orchestrator |
2025-05-03 00:31:41.192340 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-05-03 00:31:41.192724 | orchestrator | Saturday 03 May 2025 00:31:41 +0000 (0:00:00.496) 0:03:31.718 **********
2025-05-03 00:31:41.241309 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-03 00:31:41.282389 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:31:41.373785 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-03 00:31:41.399961 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:31:42.788484 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-03 00:31:42.789340 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:31:42.790439 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-03 00:31:42.794081 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:31:42.797464 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-03 00:31:42.797499 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-03 00:31:42.797521 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-03 00:31:42.799999 | orchestrator |
2025-05-03 00:31:42.801175 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-05-03 00:31:42.880553 | orchestrator | Saturday 03 May 2025 00:31:42 +0000 (0:00:00.298) 0:03:33.317 **********
2025-05-03 00:31:42.880677 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:31:42.910254 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:31:42.937268 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:31:42.963785 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:31:43.088964 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:31:43.090179 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:31:43.091545 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:31:43.092081 | orchestrator |
2025-05-03 00:31:43.092337 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-05-03 00:31:43.093124 | orchestrator | Saturday 03 May 2025 00:31:43 +0000 (0:00:00.298) 0:03:33.616 **********
2025-05-03 00:31:48.781333 | orchestrator | ok: [testbed-manager]
2025-05-03 00:31:48.781788 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:31:48.781833 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:31:48.782707 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:31:48.784481 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:31:48.785050 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:31:48.785594 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:31:48.785965 | orchestrator |
2025-05-03 00:31:48.786743 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-05-03 00:31:48.858995 | orchestrator | Saturday 03 May 2025 00:31:48 +0000 (0:00:05.698) 0:03:39.314 **********
2025-05-03 00:31:48.859117 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-05-03 00:31:48.859548 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-05-03 00:31:48.888630 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:31:48.928334 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-05-03 00:31:48.928690 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:31:48.930090 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-05-03 00:31:48.959327 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:31:48.995205 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:31:48.996087 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-05-03 00:31:48.996916 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-05-03 00:31:49.052171 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:31:49.052963 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:31:49.054097 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-05-03 00:31:49.055007 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:31:49.055821 | orchestrator |
2025-05-03 00:31:49.057114 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-05-03 00:31:49.057386 | orchestrator | Saturday 03 May 2025 00:31:49 +0000 (0:00:00.271) 0:03:39.585 **********
2025-05-03 00:31:50.065734 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-05-03 00:31:50.066126 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-05-03 00:31:50.066236 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-05-03 00:31:50.066745 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-05-03 00:31:50.067256 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-05-03 00:31:50.069232 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-05-03 00:31:50.069498 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-05-03 00:31:50.072048 | orchestrator |
2025-05-03 00:31:50.072476 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-05-03 00:31:50.073816 | orchestrator | Saturday 03 May 2025 00:31:50 +0000 (0:00:01.012) 0:03:40.597 **********
2025-05-03 00:31:50.475199 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:31:50.478235 | orchestrator |
2025-05-03 00:31:51.708536 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-05-03 00:31:51.708665 | orchestrator | Saturday 03 May 2025 00:31:50 +0000 (0:00:00.408) 0:03:41.006 **********
2025-05-03 00:31:51.708704 | orchestrator | ok: [testbed-manager]
2025-05-03 00:31:51.709537 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:31:51.710069 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:31:51.711465 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:31:51.712064 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:31:51.712652 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:31:51.713533 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:31:51.714341 | orchestrator |
2025-05-03 00:31:51.714926 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-05-03 00:31:51.715670 | orchestrator | Saturday 03 May 2025 00:31:51 +0000 (0:00:01.234) 0:03:42.240 **********
2025-05-03 00:31:52.336527 | orchestrator | ok: [testbed-manager]
2025-05-03 00:31:52.336739 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:31:52.338249 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:31:52.339218 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:31:52.339638 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:31:52.339679 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:31:52.341191 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:31:52.342261 | orchestrator |
2025-05-03 00:31:52.342642 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-05-03 00:31:52.343259 | orchestrator | Saturday 03 May 2025 00:31:52 +0000 (0:00:00.626) 0:03:42.867 **********
2025-05-03 00:31:53.001876 | orchestrator | changed: [testbed-manager]
2025-05-03 00:31:53.006256 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:31:53.006312 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:31:53.544167 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:31:53.544288 | orchestrator | changed: [testbed-node-1]
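The "Disable the dynamic motd-news service" step above reports `changed` on every host. On Debian-family systems the dynamic news fetcher is controlled by `ENABLED=` in `/etc/default/motd-news`; the exact mechanism the role uses is not shown in this log, so the following is only a plausible stand-in that performs the equivalent edit on the file's content:

```python
# Hypothetical equivalent of the "Disable the dynamic motd-news service"
# task: flip the ENABLED flag in /etc/default/motd-news content to 0.
# The actual osism.commons.motd implementation is not shown in this log.
import re

def disable_motd_news(text: str) -> str:
    """Return the config text with the motd-news fetcher switched off."""
    return re.sub(r"^ENABLED=.*$", "ENABLED=0", text, flags=re.MULTILINE)

sample = "# See motd-news(5)\nENABLED=1\nWAIT=5\n"
print(disable_motd_news(sample))
```

A task like this is idempotent: re-running it on already-disabled content produces no further change, which is why a second play run would report `ok` instead of `changed`.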
2025-05-03 00:31:53.544308 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:31:53.544323 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:31:53.544338 | orchestrator |
2025-05-03 00:31:53.544354 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-05-03 00:31:53.544411 | orchestrator | Saturday 03 May 2025 00:31:52 +0000 (0:00:00.668) 0:03:43.535 **********
2025-05-03 00:31:53.544445 | orchestrator | ok: [testbed-manager]
2025-05-03 00:31:53.544514 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:31:53.544536 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:31:53.545046 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:31:53.545918 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:31:53.547293 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:31:53.547526 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:31:53.548016 | orchestrator |
2025-05-03 00:31:53.548440 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-05-03 00:31:53.548694 | orchestrator | Saturday 03 May 2025 00:31:53 +0000 (0:00:00.540) 0:03:44.076 **********
2025-05-03 00:31:54.401578 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746230668.6727622, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 00:31:54.402149 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746230676.2900295, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 00:31:54.402472 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746230682.9627595, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 00:31:54.402958 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746230684.9701526, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 00:31:54.403927 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746230683.8095162, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 00:31:54.404755 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746230677.1710894, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 00:31:54.404831 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746230685.170147, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 00:31:54.406095 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746230697.4129548, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 00:31:54.406356 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746230608.431437, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 00:31:54.406972 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746230613.159864, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 00:31:54.407692 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746230603.1723135, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 00:31:54.407934 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746230612.950774, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 00:31:54.408290 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746230611.8991702, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 00:31:54.410104 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746230604.4272351, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 00:31:54.410341 | orchestrator |
2025-05-03 00:31:54.410525 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-05-03 00:31:54.410863 | orchestrator | Saturday 03 May 2025 00:31:54 +0000 (0:00:00.857) 0:03:44.933 **********
2025-05-03 00:31:55.518367 | orchestrator | changed: [testbed-manager]
2025-05-03 00:31:55.518580 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:31:55.518982 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:31:55.519769 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:31:55.520687 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:31:55.521224 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:31:55.522522 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:31:55.522821 | orchestrator |
2025-05-03 00:31:55.528306 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-05-03 00:31:55.529024 | orchestrator | Saturday 03 May 2025 00:31:55 +0000 (0:00:01.115) 0:03:46.049 **********
2025-05-03 00:31:56.642581 | orchestrator | changed: [testbed-manager]
2025-05-03 00:31:56.643107 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:31:56.644076 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:31:56.648494 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:31:56.650192 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:31:56.651970 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:31:56.653094 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:31:56.654144 | orchestrator |
2025-05-03 00:31:56.655101 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-05-03 00:31:56.655438 | orchestrator | Saturday 03 May 2025 00:31:56 +0000 (0:00:01.125) 0:03:47.174 **********
2025-05-03 00:31:56.739792 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:31:56.771096 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:31:56.801823 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:31:56.834400 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:31:56.888662 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:31:56.888857 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:31:56.890352 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:31:56.890766 | orchestrator |
2025-05-03 00:31:56.891417 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
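The "Remove pam_motd.so rule" task above loops over the files found in /etc/pam.d (the log shows /etc/pam.d/sshd and /etc/pam.d/login changing) and strips the rules that load pam_motd.so, so the static motd copied in the next tasks is the only one shown at login. The log does not show the role's actual edit; a minimal stand-in for the same transformation:

```python
# Illustrative sketch of the "Remove pam_motd.so rule" step: drop any line
# that loads pam_motd.so from a PAM service file's content. The real role
# likely uses an Ansible lineinfile/replace task; this is only a stand-in.
def strip_pam_motd(pam_text: str) -> str:
    """Return the PAM config text without pam_motd.so session rules."""
    kept = [line for line in pam_text.splitlines() if "pam_motd.so" not in line]
    return "\n".join(kept) + "\n"

sample = (
    "session    optional     pam_motd.so  motd=/run/motd.dynamic\n"
    "session    optional     pam_motd.so noupdate\n"
    "session    required     pam_limits.so\n"
)
print(strip_pam_motd(sample))  # only the pam_limits.so rule survives
```

Together with the skipped "Configure SSH to print the motd" task and the following "Configure SSH to not print the motd" task, this leaves sshd itself responsible for not emitting a dynamic motd.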
2025-05-03 00:31:56.892536 | orchestrator | Saturday 03 May 2025 00:31:56 +0000 (0:00:00.248) 0:03:47.422 **********
2025-05-03 00:31:57.600209 | orchestrator | ok: [testbed-manager]
2025-05-03 00:31:57.600472 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:31:57.602544 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:31:57.603638 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:31:57.605192 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:31:57.605955 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:31:57.607086 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:31:57.608325 | orchestrator |
2025-05-03 00:31:57.609304 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-05-03 00:31:57.610167 | orchestrator | Saturday 03 May 2025 00:31:57 +0000 (0:00:00.706) 0:03:48.129 **********
2025-05-03 00:31:57.996370 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:31:57.996630 | orchestrator |
2025-05-03 00:31:57.997198 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-05-03 00:31:57.997551 | orchestrator | Saturday 03 May 2025 00:31:57 +0000 (0:00:00.399) 0:03:48.529 **********
2025-05-03 00:32:05.587924 | orchestrator | ok: [testbed-manager]
2025-05-03 00:32:05.588159 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:32:05.588188 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:32:05.588210 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:32:05.590199 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:32:05.590400 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:32:05.591566 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:32:05.591826 | orchestrator |
2025-05-03 00:32:05.593351 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-05-03 00:32:06.727406 | orchestrator | Saturday 03 May 2025 00:32:05 +0000 (0:00:07.587) 0:03:56.116 **********
2025-05-03 00:32:06.727589 | orchestrator | ok: [testbed-manager]
2025-05-03 00:32:06.727675 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:32:06.729109 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:32:06.729519 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:32:06.730654 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:32:06.731030 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:32:06.732079 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:32:06.732210 | orchestrator |
2025-05-03 00:32:06.732997 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-05-03 00:32:06.733816 | orchestrator | Saturday 03 May 2025 00:32:06 +0000 (0:00:01.140) 0:03:57.257 **********
2025-05-03 00:32:07.752873 | orchestrator | ok: [testbed-manager]
2025-05-03 00:32:07.754732 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:32:07.756170 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:32:07.757478 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:32:07.758933 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:32:07.761336 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:32:07.762426 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:32:07.763207 | orchestrator |
2025-05-03 00:32:07.764134 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-05-03 00:32:07.764842 | orchestrator | Saturday 03 May 2025 00:32:07 +0000 (0:00:01.025) 0:03:58.282 **********
2025-05-03 00:32:08.151822 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:32:08.152456 | orchestrator |
2025-05-03 00:32:08.152981 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-05-03 00:32:08.153236 | orchestrator | Saturday 03 May 2025 00:32:08 +0000 (0:00:00.402) 0:03:58.684 **********
2025-05-03 00:32:16.295933 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:32:16.296132 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:32:16.299551 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:32:16.300563 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:32:16.300836 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:32:16.301925 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:32:16.302327 | orchestrator | changed: [testbed-manager]
2025-05-03 00:32:16.303539 | orchestrator |
2025-05-03 00:32:16.304096 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-05-03 00:32:16.304828 | orchestrator | Saturday 03 May 2025 00:32:16 +0000 (0:00:08.141) 0:04:06.826 **********
2025-05-03 00:32:16.891929 | orchestrator | changed: [testbed-manager]
2025-05-03 00:32:16.892632 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:32:16.893623 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:32:16.894171 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:32:16.895251 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:32:16.895483 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:32:16.895987 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:32:16.896686 | orchestrator |
2025-05-03 00:32:16.897105 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-05-03 00:32:16.897547 | orchestrator | Saturday 03 May 2025 00:32:16 +0000 (0:00:00.598) 0:04:07.424 **********
2025-05-03 00:32:18.017573 | orchestrator | changed: [testbed-manager]
2025-05-03 00:32:18.018344 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:32:18.019620 | orchestrator |
changed: [testbed-node-0] 2025-05-03 00:32:18.020860 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:32:18.021654 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:32:18.022616 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:32:18.023506 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:32:18.024166 | orchestrator | 2025-05-03 00:32:18.024655 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-05-03 00:32:18.025412 | orchestrator | Saturday 03 May 2025 00:32:18 +0000 (0:00:01.122) 0:04:08.547 ********** 2025-05-03 00:32:19.060583 | orchestrator | changed: [testbed-manager] 2025-05-03 00:32:19.061120 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:32:19.062467 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:32:19.063233 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:32:19.064185 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:32:19.065035 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:32:19.065793 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:32:19.066432 | orchestrator | 2025-05-03 00:32:19.067041 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-05-03 00:32:19.067405 | orchestrator | Saturday 03 May 2025 00:32:19 +0000 (0:00:01.042) 0:04:09.590 ********** 2025-05-03 00:32:19.173965 | orchestrator | ok: [testbed-manager] 2025-05-03 00:32:19.213048 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:32:19.245786 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:32:19.294098 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:32:19.372311 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:32:19.373558 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:32:19.379282 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:32:19.380429 | orchestrator | 2025-05-03 00:32:19.381909 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default 
value] *** 2025-05-03 00:32:19.383331 | orchestrator | Saturday 03 May 2025 00:32:19 +0000 (0:00:00.309) 0:04:09.900 ********** 2025-05-03 00:32:19.451792 | orchestrator | ok: [testbed-manager] 2025-05-03 00:32:19.489996 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:32:19.561151 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:32:19.594305 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:32:19.679413 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:32:19.680481 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:32:19.680585 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:32:19.681651 | orchestrator | 2025-05-03 00:32:19.686088 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-05-03 00:32:19.687187 | orchestrator | Saturday 03 May 2025 00:32:19 +0000 (0:00:00.312) 0:04:10.213 ********** 2025-05-03 00:32:19.788966 | orchestrator | ok: [testbed-manager] 2025-05-03 00:32:19.839052 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:32:19.871375 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:32:19.908487 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:32:19.984998 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:32:19.985680 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:32:19.986517 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:32:19.987116 | orchestrator | 2025-05-03 00:32:19.987768 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-05-03 00:32:19.988687 | orchestrator | Saturday 03 May 2025 00:32:19 +0000 (0:00:00.303) 0:04:10.517 ********** 2025-05-03 00:32:25.857332 | orchestrator | ok: [testbed-manager] 2025-05-03 00:32:25.857951 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:32:25.858661 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:32:25.859366 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:32:25.861505 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:32:25.862389 | orchestrator | ok: 
[testbed-node-4] 2025-05-03 00:32:25.863132 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:32:25.863978 | orchestrator | 2025-05-03 00:32:25.864605 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-05-03 00:32:25.865201 | orchestrator | Saturday 03 May 2025 00:32:25 +0000 (0:00:05.872) 0:04:16.389 ********** 2025-05-03 00:32:26.228488 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:32:26.229304 | orchestrator | 2025-05-03 00:32:26.230291 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-05-03 00:32:26.231438 | orchestrator | Saturday 03 May 2025 00:32:26 +0000 (0:00:00.370) 0:04:16.760 ********** 2025-05-03 00:32:26.283519 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-05-03 00:32:26.284548 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-05-03 00:32:26.319166 | orchestrator | skipping: [testbed-manager] 2025-05-03 00:32:26.320769 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-05-03 00:32:26.360420 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-05-03 00:32:26.413990 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:32:26.414536 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-05-03 00:32:26.414583 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-05-03 00:32:26.415842 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-05-03 00:32:26.416553 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-05-03 00:32:26.446626 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:32:26.490647 | orchestrator | skipping: [testbed-node-5] 2025-05-03 
00:32:26.492192 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-05-03 00:32:26.493129 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-05-03 00:32:26.493930 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-05-03 00:32:26.559929 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-05-03 00:32:26.560742 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:32:26.561137 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:32:26.564764 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-05-03 00:32:26.565195 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-05-03 00:32:26.565782 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:32:26.566380 | orchestrator | 2025-05-03 00:32:26.567756 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-05-03 00:32:26.569020 | orchestrator | Saturday 03 May 2025 00:32:26 +0000 (0:00:00.333) 0:04:17.093 ********** 2025-05-03 00:32:26.954225 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:32:26.954619 | orchestrator | 2025-05-03 00:32:26.955579 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-05-03 00:32:26.957480 | orchestrator | Saturday 03 May 2025 00:32:26 +0000 (0:00:00.391) 0:04:17.485 ********** 2025-05-03 00:32:27.022312 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-05-03 00:32:27.070284 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-05-03 00:32:27.071008 | orchestrator | skipping: [testbed-manager] 2025-05-03 00:32:27.071925 | orchestrator | skipping: [testbed-node-4] => 
(item=ModemManager.service)  2025-05-03 00:32:27.106175 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:32:27.106596 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-05-03 00:32:27.137998 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:32:27.181220 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-05-03 00:32:27.181372 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:32:27.181988 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-05-03 00:32:27.252630 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:32:27.252990 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:32:27.254069 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-05-03 00:32:27.255430 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:32:27.255632 | orchestrator | 2025-05-03 00:32:27.255669 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-05-03 00:32:27.256335 | orchestrator | Saturday 03 May 2025 00:32:27 +0000 (0:00:00.300) 0:04:17.785 ********** 2025-05-03 00:32:27.650761 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:32:27.654766 | orchestrator | 2025-05-03 00:32:27.655277 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-05-03 00:32:27.655790 | orchestrator | Saturday 03 May 2025 00:32:27 +0000 (0:00:00.395) 0:04:18.181 ********** 2025-05-03 00:33:00.358003 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:33:00.358276 | orchestrator | changed: [testbed-manager] 2025-05-03 00:33:00.358311 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:33:00.358327 | orchestrator | changed: 
[testbed-node-1] 2025-05-03 00:33:00.358341 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:33:00.358356 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:33:00.358370 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:33:00.358384 | orchestrator | 2025-05-03 00:33:00.358406 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-05-03 00:33:07.259832 | orchestrator | Saturday 03 May 2025 00:33:00 +0000 (0:00:32.681) 0:04:50.864 ********** 2025-05-03 00:33:07.260110 | orchestrator | changed: [testbed-manager] 2025-05-03 00:33:07.260846 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:33:07.261192 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:33:07.267527 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:33:07.267720 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:33:07.267752 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:33:07.269026 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:33:07.269401 | orchestrator | 2025-05-03 00:33:07.269873 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-05-03 00:33:07.271544 | orchestrator | Saturday 03 May 2025 00:33:07 +0000 (0:00:06.926) 0:04:57.791 ********** 2025-05-03 00:33:14.353109 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:33:14.353293 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:33:14.354147 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:33:14.354954 | orchestrator | changed: [testbed-manager] 2025-05-03 00:33:14.356953 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:33:14.358101 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:33:14.359020 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:33:14.359477 | orchestrator | 2025-05-03 00:33:14.360383 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-05-03 00:33:14.361326 | orchestrator | 
Saturday 03 May 2025 00:33:14 +0000 (0:00:07.092) 0:05:04.883 ********** 2025-05-03 00:33:16.035455 | orchestrator | ok: [testbed-manager] 2025-05-03 00:33:16.035621 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:33:16.037981 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:33:16.038936 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:33:16.039895 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:33:16.040530 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:33:16.041182 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:33:16.041793 | orchestrator | 2025-05-03 00:33:16.042520 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-05-03 00:33:16.042903 | orchestrator | Saturday 03 May 2025 00:33:16 +0000 (0:00:01.682) 0:05:06.566 ********** 2025-05-03 00:33:21.540953 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:33:21.542890 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:33:21.542931 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:33:21.544416 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:33:21.545207 | orchestrator | changed: [testbed-manager] 2025-05-03 00:33:21.546229 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:33:21.547064 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:33:21.547686 | orchestrator | 2025-05-03 00:33:21.548583 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-05-03 00:33:21.549372 | orchestrator | Saturday 03 May 2025 00:33:21 +0000 (0:00:05.505) 0:05:12.071 ********** 2025-05-03 00:33:21.930336 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:33:21.930504 | orchestrator | 2025-05-03 00:33:21.935180 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init 
configuration directory] ******* 2025-05-03 00:33:22.693062 | orchestrator | Saturday 03 May 2025 00:33:21 +0000 (0:00:00.391) 0:05:12.462 ********** 2025-05-03 00:33:22.693200 | orchestrator | changed: [testbed-manager] 2025-05-03 00:33:22.697999 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:33:22.698121 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:33:22.698147 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:33:22.698205 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:33:22.699314 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:33:22.699692 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:33:22.700499 | orchestrator | 2025-05-03 00:33:22.701151 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-05-03 00:33:22.701672 | orchestrator | Saturday 03 May 2025 00:33:22 +0000 (0:00:00.760) 0:05:13.223 ********** 2025-05-03 00:33:24.325293 | orchestrator | ok: [testbed-manager] 2025-05-03 00:33:24.328028 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:33:24.328455 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:33:24.330776 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:33:24.331511 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:33:24.331606 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:33:24.331624 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:33:24.331638 | orchestrator | 2025-05-03 00:33:24.331664 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-05-03 00:33:24.332093 | orchestrator | Saturday 03 May 2025 00:33:24 +0000 (0:00:01.632) 0:05:14.855 ********** 2025-05-03 00:33:25.112449 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:33:25.112654 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:33:25.114094 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:33:25.114436 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:33:25.115282 | orchestrator | 
changed: [testbed-manager] 2025-05-03 00:33:25.116070 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:33:25.116526 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:33:25.117137 | orchestrator | 2025-05-03 00:33:25.118086 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-05-03 00:33:25.118299 | orchestrator | Saturday 03 May 2025 00:33:25 +0000 (0:00:00.788) 0:05:15.644 ********** 2025-05-03 00:33:25.186399 | orchestrator | skipping: [testbed-manager] 2025-05-03 00:33:25.219111 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:33:25.251834 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:33:25.286194 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:33:25.317985 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:33:25.369915 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:33:25.370546 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:33:25.371535 | orchestrator | 2025-05-03 00:33:25.373153 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-05-03 00:33:25.373570 | orchestrator | Saturday 03 May 2025 00:33:25 +0000 (0:00:00.258) 0:05:15.902 ********** 2025-05-03 00:33:25.431441 | orchestrator | skipping: [testbed-manager] 2025-05-03 00:33:25.463202 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:33:25.492816 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:33:25.552711 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:33:25.593301 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:33:25.783925 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:33:25.786366 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:33:25.786409 | orchestrator | 2025-05-03 00:33:25.907122 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-05-03 00:33:25.907251 | orchestrator | Saturday 03 May 2025 00:33:25 +0000 (0:00:00.411) 
0:05:16.314 ********** 2025-05-03 00:33:25.907305 | orchestrator | ok: [testbed-manager] 2025-05-03 00:33:25.941213 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:33:25.974423 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:33:26.007250 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:33:26.067205 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:33:26.067693 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:33:26.069666 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:33:26.070153 | orchestrator | 2025-05-03 00:33:26.071086 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-05-03 00:33:26.071429 | orchestrator | Saturday 03 May 2025 00:33:26 +0000 (0:00:00.285) 0:05:16.599 ********** 2025-05-03 00:33:26.174697 | orchestrator | skipping: [testbed-manager] 2025-05-03 00:33:26.208188 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:33:26.247454 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:33:26.273100 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:33:26.341053 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:33:26.341700 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:33:26.343220 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:33:26.343672 | orchestrator | 2025-05-03 00:33:26.343704 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-05-03 00:33:26.344279 | orchestrator | Saturday 03 May 2025 00:33:26 +0000 (0:00:00.274) 0:05:16.874 ********** 2025-05-03 00:33:26.426619 | orchestrator | ok: [testbed-manager] 2025-05-03 00:33:26.497584 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:33:26.539142 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:33:26.569469 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:33:26.662261 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:33:26.663348 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:33:26.664426 | orchestrator | ok: 
[testbed-node-2] 2025-05-03 00:33:26.665236 | orchestrator | 2025-05-03 00:33:26.665965 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-05-03 00:33:26.666700 | orchestrator | Saturday 03 May 2025 00:33:26 +0000 (0:00:00.319) 0:05:17.193 ********** 2025-05-03 00:33:26.759433 | orchestrator | skipping: [testbed-manager] 2025-05-03 00:33:26.793860 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:33:26.828175 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:33:26.880304 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:33:26.957273 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:33:26.957797 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:33:26.958550 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:33:26.959029 | orchestrator | 2025-05-03 00:33:26.960369 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-05-03 00:33:27.055453 | orchestrator | Saturday 03 May 2025 00:33:26 +0000 (0:00:00.295) 0:05:17.489 ********** 2025-05-03 00:33:27.055585 | orchestrator | skipping: [testbed-manager] 2025-05-03 00:33:27.090181 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:33:27.121020 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:33:27.151061 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:33:27.201730 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:33:27.202377 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:33:27.203075 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:33:27.204271 | orchestrator | 2025-05-03 00:33:27.205402 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-05-03 00:33:27.205915 | orchestrator | Saturday 03 May 2025 00:33:27 +0000 (0:00:00.245) 0:05:17.735 ********** 2025-05-03 00:33:27.697366 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:33:27.697811 | orchestrator | 2025-05-03 00:33:27.698400 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-05-03 00:33:27.699240 | orchestrator | Saturday 03 May 2025 00:33:27 +0000 (0:00:00.492) 0:05:18.227 ********** 2025-05-03 00:33:28.509957 | orchestrator | ok: [testbed-manager] 2025-05-03 00:33:28.510197 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:33:28.510418 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:33:28.511483 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:33:28.513060 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:33:28.513150 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:33:28.513184 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:33:28.513865 | orchestrator | 2025-05-03 00:33:28.514358 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-05-03 00:33:28.515117 | orchestrator | Saturday 03 May 2025 00:33:28 +0000 (0:00:00.814) 0:05:19.042 ********** 2025-05-03 00:33:31.292382 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:33:31.292872 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:33:31.293612 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:33:31.294697 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:33:31.295342 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:33:31.296396 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:33:31.297123 | orchestrator | ok: [testbed-manager] 2025-05-03 00:33:31.297737 | orchestrator | 2025-05-03 00:33:31.298366 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-05-03 00:33:31.298747 | orchestrator | Saturday 03 May 2025 00:33:31 +0000 
(0:00:02.780) 0:05:21.823 ********** 2025-05-03 00:33:31.368995 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-05-03 00:33:31.369602 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-05-03 00:33:31.467076 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-05-03 00:33:31.467238 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-05-03 00:33:31.551172 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-05-03 00:33:31.551279 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-05-03 00:33:31.551309 | orchestrator | skipping: [testbed-manager] 2025-05-03 00:33:31.551870 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-05-03 00:33:31.552554 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-05-03 00:33:31.553267 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-05-03 00:33:31.640368 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:33:31.641035 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-05-03 00:33:31.642359 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-05-03 00:33:31.643137 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-05-03 00:33:31.723108 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:33:31.723446 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-05-03 00:33:31.723969 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-05-03 00:33:31.724661 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-05-03 00:33:31.801409 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:33:31.801674 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-05-03 00:33:31.802701 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-05-03 00:33:31.803288 | orchestrator | skipping: [testbed-node-1] => 
(item=docker-engine)  2025-05-03 00:33:31.940171 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:33:31.941535 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:33:31.941763 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-05-03 00:33:31.944196 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-05-03 00:33:31.945181 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-05-03 00:33:31.945915 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:33:31.946866 | orchestrator | 2025-05-03 00:33:31.947976 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-05-03 00:33:31.948565 | orchestrator | Saturday 03 May 2025 00:33:31 +0000 (0:00:00.645) 0:05:22.468 ********** 2025-05-03 00:33:38.173214 | orchestrator | ok: [testbed-manager] 2025-05-03 00:33:38.173773 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:33:38.173822 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:33:38.174711 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:33:38.177108 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:33:38.177543 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:33:38.178595 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:33:38.179501 | orchestrator | 2025-05-03 00:33:38.180118 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-05-03 00:33:38.180782 | orchestrator | Saturday 03 May 2025 00:33:38 +0000 (0:00:06.235) 0:05:28.703 ********** 2025-05-03 00:33:39.246925 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:33:39.247638 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:33:39.247698 | orchestrator | ok: [testbed-manager] 2025-05-03 00:33:39.248454 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:33:39.249120 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:33:39.249794 | orchestrator | changed: [testbed-node-1] 
2025-05-03 00:33:39.250575 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:33:39.251275 | orchestrator |
2025-05-03 00:33:39.252117 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-05-03 00:33:39.252477 | orchestrator | Saturday 03 May 2025 00:33:39 +0000 (0:00:01.072) 0:05:29.776 **********
2025-05-03 00:33:46.132695 | orchestrator | ok: [testbed-manager]
2025-05-03 00:33:46.133069 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:33:46.133127 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:33:46.136741 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:33:46.138164 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:33:46.138202 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:33:46.138218 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:33:46.138248 | orchestrator |
2025-05-03 00:33:46.139138 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-05-03 00:33:46.139663 | orchestrator | Saturday 03 May 2025 00:33:46 +0000 (0:00:06.885) 0:05:36.661 **********
2025-05-03 00:33:49.203724 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:33:49.205993 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:33:49.207403 | orchestrator | changed: [testbed-manager]
2025-05-03 00:33:49.207483 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:33:49.208495 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:33:49.210767 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:33:49.211691 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:33:49.212466 | orchestrator |
2025-05-03 00:33:49.212534 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-05-03 00:33:49.213072 | orchestrator | Saturday 03 May 2025 00:33:49 +0000 (0:00:03.071) 0:05:39.733 **********
2025-05-03 00:33:50.518851 | orchestrator | ok: [testbed-manager]
2025-05-03 00:33:50.519229 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:33:50.519267 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:33:50.521173 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:33:50.521407 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:33:50.522368 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:33:50.523118 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:33:50.524019 | orchestrator |
2025-05-03 00:33:50.524674 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-05-03 00:33:50.525038 | orchestrator | Saturday 03 May 2025 00:33:50 +0000 (0:00:01.316) 0:05:41.049 **********
2025-05-03 00:33:52.046604 | orchestrator | ok: [testbed-manager]
2025-05-03 00:33:52.048862 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:33:52.051200 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:33:52.051279 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:33:52.052253 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:33:52.052685 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:33:52.054124 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:33:52.054719 | orchestrator |
2025-05-03 00:33:52.055633 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-05-03 00:33:52.056285 | orchestrator | Saturday 03 May 2025 00:33:52 +0000 (0:00:00.619) 0:05:42.577 **********
2025-05-03 00:33:52.261999 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:33:52.355369 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:33:52.434244 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:33:52.513492 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:33:52.666980 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:33:52.667549 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:33:52.667594 | orchestrator | changed: [testbed-manager]
2025-05-03 00:33:52.668171 | orchestrator |
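The "Pin", "Unlock" and "Lock" tasks in the log above follow the usual apt hold pattern, which keeps a routine `apt-get upgrade` from moving a package and releases the hold only for a deliberate upgrade. A minimal illustrative sketch of what such tasks can look like (package names and task layout are assumptions, not the role's actual implementation):

```yaml
# Hypothetical sketch; the real osism.services.docker tasks may differ.
- name: Pin docker package version
  ansible.builtin.dpkg_selections:
    name: docker-ce
    selection: hold        # blocks unattended upgrades of this package

- name: Unlock containerd package
  ansible.builtin.dpkg_selections:
    name: containerd.io
    selection: install     # releases a previous hold before upgrading
```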
2025-05-03 00:33:52.669295 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-05-03 00:33:52.669567 | orchestrator | Saturday 03 May 2025 00:33:52 +0000 (0:00:00.619) 0:05:43.197 **********
2025-05-03 00:34:01.953413 | orchestrator | ok: [testbed-manager]
2025-05-03 00:34:01.953756 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:34:01.954135 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:34:01.954815 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:34:01.956195 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:34:01.956712 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:34:01.957229 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:34:01.958202 | orchestrator |
2025-05-03 00:34:01.958431 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-05-03 00:34:01.958932 | orchestrator | Saturday 03 May 2025 00:34:01 +0000 (0:00:09.266) 0:05:52.464 **********
2025-05-03 00:34:02.840476 | orchestrator | changed: [testbed-manager]
2025-05-03 00:34:02.840703 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:34:02.844997 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:34:02.846078 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:34:02.847516 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:34:02.847987 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:34:02.848369 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:34:02.848858 | orchestrator |
2025-05-03 00:34:02.849413 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-05-03 00:34:02.849566 | orchestrator | Saturday 03 May 2025 00:34:02 +0000 (0:00:00.909) 0:05:53.373 **********
2025-05-03 00:34:14.852541 | orchestrator | ok: [testbed-manager]
2025-05-03 00:34:14.852771 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:34:14.852798 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:34:14.852813 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:34:14.852828 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:34:14.852848 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:34:14.853648 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:34:14.854109 | orchestrator |
2025-05-03 00:34:14.855011 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-05-03 00:34:14.855743 | orchestrator | Saturday 03 May 2025 00:34:14 +0000 (0:00:12.002) 0:06:05.375 **********
2025-05-03 00:34:27.185587 | orchestrator | ok: [testbed-manager]
2025-05-03 00:34:27.186202 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:34:27.186254 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:34:27.186313 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:34:27.186334 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:34:27.186360 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:34:27.186632 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:34:27.186663 | orchestrator |
2025-05-03 00:34:27.187065 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-05-03 00:34:27.187606 | orchestrator | Saturday 03 May 2025 00:34:27 +0000 (0:00:12.336) 0:06:17.712 **********
2025-05-03 00:34:27.551585 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-05-03 00:34:28.391070 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-05-03 00:34:28.391580 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-05-03 00:34:28.392033 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-05-03 00:34:28.394792 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-05-03 00:34:28.395778 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-05-03 00:34:28.396755 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-05-03 00:34:28.397272 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-05-03 00:34:28.397778 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-05-03 00:34:28.398397 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-05-03 00:34:28.398834 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-05-03 00:34:28.400173 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-05-03 00:34:28.400241 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-05-03 00:34:28.400868 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-05-03 00:34:28.402522 | orchestrator |
2025-05-03 00:34:28.402851 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-05-03 00:34:28.517365 | orchestrator | Saturday 03 May 2025 00:34:28 +0000 (0:00:01.210) 0:06:18.923 **********
2025-05-03 00:34:28.517532 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:34:28.588465 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:34:28.654579 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:34:28.726982 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:34:28.794154 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:34:28.906681 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:34:28.907044 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:34:28.907631 | orchestrator |
2025-05-03 00:34:28.907668 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-05-03 00:34:28.908731 | orchestrator | Saturday 03 May 2025 00:34:28 +0000 (0:00:00.515) 0:06:19.438 **********
2025-05-03 00:34:32.476122 | orchestrator | ok: [testbed-manager]
2025-05-03 00:34:32.476921 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:34:32.477355 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:34:32.478864 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:34:32.479661 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:34:32.480233 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:34:32.480697 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:34:32.481079 | orchestrator |
2025-05-03 00:34:32.481742 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-05-03 00:34:32.482166 | orchestrator | Saturday 03 May 2025 00:34:32 +0000 (0:00:03.568) 0:06:23.006 **********
2025-05-03 00:34:32.617501 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:34:32.678991 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:34:32.912953 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:34:32.976487 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:34:33.037358 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:34:33.140395 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:34:33.140705 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:34:33.140768 | orchestrator |
2025-05-03 00:34:33.140859 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-05-03 00:34:33.141973 | orchestrator | Saturday 03 May 2025 00:34:33 +0000 (0:00:00.665) 0:06:23.672 **********
2025-05-03 00:34:33.210480 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-05-03 00:34:33.211825 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-05-03 00:34:33.273726 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:34:33.274148 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-05-03 00:34:33.274360 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-05-03 00:34:33.356073 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:34:33.356542 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-05-03 00:34:33.357419 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-05-03 00:34:33.426838 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:34:33.430204 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-05-03 00:34:33.430637 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-05-03 00:34:33.496128 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:34:33.496335 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-05-03 00:34:33.496388 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-05-03 00:34:33.568547 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:34:33.569550 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-05-03 00:34:33.573241 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-05-03 00:34:33.681126 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:34:33.681828 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-05-03 00:34:33.683099 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-05-03 00:34:33.684265 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:34:33.686182 | orchestrator |
2025-05-03 00:34:33.686810 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-05-03 00:34:33.686841 | orchestrator | Saturday 03 May 2025 00:34:33 +0000 (0:00:00.539) 0:06:24.212 **********
2025-05-03 00:34:33.819702 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:34:33.882543 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:34:33.943077 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:34:34.015351 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:34:34.090773 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:34:34.190380 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:34:34.190968 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:34:34.192251 | orchestrator |
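The "Block installation of python docker packages" task above (skipped in this run) typically works by dropping an apt preferences entry with a negative pin priority, which makes the package uninstallable from any source. An illustrative example of such a pin file (the path and exact contents are assumptions):

```
# Hypothetical /etc/apt/preferences.d/python3-docker
# A priority below 0 prevents the package from ever being installed.
Package: python3-docker
Pin: release *
Pin-Priority: -1
```

The matching "Unblock" task seen earlier in the log would then simply remove this file again.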
2025-05-03 00:34:34.193458 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-05-03 00:34:34.196071 | orchestrator | Saturday 03 May 2025 00:34:34 +0000 (0:00:00.509) 0:06:24.721 **********
2025-05-03 00:34:34.333987 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:34:34.394949 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:34:34.461949 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:34:34.526513 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:34:34.589694 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:34:34.684459 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:34:34.685671 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:34:34.686860 | orchestrator |
2025-05-03 00:34:34.690346 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-05-03 00:34:34.813989 | orchestrator | Saturday 03 May 2025 00:34:34 +0000 (0:00:00.495) 0:06:25.217 **********
2025-05-03 00:34:34.814214 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:34:34.883844 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:34:34.947351 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:34:35.007596 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:34:35.082424 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:34:35.195736 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:34:35.198107 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:34:35.198147 | orchestrator |
2025-05-03 00:34:35.198173 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-05-03 00:34:41.079068 | orchestrator | Saturday 03 May 2025 00:34:35 +0000 (0:00:00.507) 0:06:25.724 **********
2025-05-03 00:34:41.079222 | orchestrator | ok: [testbed-manager]
2025-05-03 00:34:41.081494 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:34:41.081532 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:34:41.082406 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:34:41.082439 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:34:41.083161 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:34:41.085105 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:34:41.085628 | orchestrator |
2025-05-03 00:34:41.086381 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-05-03 00:34:41.087406 | orchestrator | Saturday 03 May 2025 00:34:41 +0000 (0:00:05.882) 0:06:31.607 **********
2025-05-03 00:34:41.945217 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:34:41.945944 | orchestrator |
2025-05-03 00:34:41.946375 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-05-03 00:34:41.947399 | orchestrator | Saturday 03 May 2025 00:34:41 +0000 (0:00:00.870) 0:06:32.478 **********
2025-05-03 00:34:42.355627 | orchestrator | ok: [testbed-manager]
2025-05-03 00:34:42.802560 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:34:42.803395 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:34:42.804017 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:34:42.805491 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:34:42.806331 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:34:42.807060 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:34:42.808212 | orchestrator |
2025-05-03 00:34:42.808905 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-05-03 00:34:42.809684 | orchestrator | Saturday 03 May 2025 00:34:42 +0000 (0:00:00.855) 0:06:33.333 **********
2025-05-03 00:34:43.813022 | orchestrator | ok: [testbed-manager]
2025-05-03 00:34:43.813822 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:34:43.814954 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:34:43.815566 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:34:43.819293 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:34:43.819775 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:34:43.820337 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:34:43.820945 | orchestrator |
2025-05-03 00:34:43.821473 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-05-03 00:34:43.822008 | orchestrator | Saturday 03 May 2025 00:34:43 +0000 (0:00:01.010) 0:06:34.344 **********
2025-05-03 00:34:45.137846 | orchestrator | ok: [testbed-manager]
2025-05-03 00:34:45.138173 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:34:45.138580 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:34:45.140325 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:34:45.141080 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:34:45.143287 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:34:45.144063 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:34:45.145111 | orchestrator |
2025-05-03 00:34:45.146608 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-05-03 00:34:45.147302 | orchestrator | Saturday 03 May 2025 00:34:45 +0000 (0:00:01.412) 0:06:35.668 **********
2025-05-03 00:34:45.267651 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:34:46.552852 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:34:46.553137 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:34:46.553170 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:34:46.555972 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:34:46.556109 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:34:46.556505 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:34:46.556818 | orchestrator |
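The "systemd overlay" tasks above create a drop-in directory and place an override file in it, so that unit settings can be adjusted without editing the packaged docker.service; a subsequent daemon reload makes systemd pick up the change. An illustrative drop-in (path and option are assumptions, not the file the role actually ships):

```ini
; Hypothetical /etc/systemd/system/docker.service.d/overlay.conf
; Settings here override the packaged docker.service unit.
[Service]
; Example override; the role's real overlay file may set different options.
LimitNOFILE=1048576
```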
2025-05-03 00:34:46.557247 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-05-03 00:34:46.557575 | orchestrator | Saturday 03 May 2025 00:34:46 +0000 (0:00:01.412) 0:06:37.080 **********
2025-05-03 00:34:47.841078 | orchestrator | ok: [testbed-manager]
2025-05-03 00:34:47.842417 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:34:47.843207 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:34:47.843781 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:34:47.844567 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:34:47.845061 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:34:47.846180 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:34:47.846514 | orchestrator |
2025-05-03 00:34:47.847152 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-05-03 00:34:47.847740 | orchestrator | Saturday 03 May 2025 00:34:47 +0000 (0:00:01.292) 0:06:38.373 **********
2025-05-03 00:34:49.217139 | orchestrator | changed: [testbed-manager]
2025-05-03 00:34:49.217766 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:34:49.218399 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:34:49.219287 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:34:49.220223 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:34:49.220932 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:34:49.222534 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:34:49.223289 | orchestrator |
2025-05-03 00:34:49.224570 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-05-03 00:34:49.225224 | orchestrator | Saturday 03 May 2025 00:34:49 +0000 (0:00:01.374) 0:06:39.747 **********
2025-05-03 00:34:50.241804 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:34:50.242278 | orchestrator |
2025-05-03 00:34:50.242498 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-05-03 00:34:50.243331 | orchestrator | Saturday 03 May 2025 00:34:50 +0000 (0:00:01.025) 0:06:40.773 **********
2025-05-03 00:34:51.589962 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:34:51.590236 | orchestrator | ok: [testbed-manager]
2025-05-03 00:34:51.590272 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:34:51.590502 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:34:51.591380 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:34:51.591623 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:34:51.593343 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:34:51.593757 | orchestrator |
2025-05-03 00:34:51.594677 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-05-03 00:34:51.595316 | orchestrator | Saturday 03 May 2025 00:34:51 +0000 (0:00:01.344) 0:06:42.117 **********
2025-05-03 00:34:52.711459 | orchestrator | ok: [testbed-manager]
2025-05-03 00:34:52.712179 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:34:52.712329 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:34:52.713106 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:34:52.713637 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:34:52.715893 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:34:53.916015 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:34:53.916145 | orchestrator |
2025-05-03 00:34:53.916168 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-05-03 00:34:53.916185 | orchestrator | Saturday 03 May 2025 00:34:52 +0000 (0:00:01.125) 0:06:43.242 **********
2025-05-03 00:34:53.916217 | orchestrator | ok: [testbed-manager]
2025-05-03 00:34:53.918344 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:34:53.918481 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:34:53.918517 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:34:53.918601 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:34:53.918741 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:34:53.918773 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:34:53.919390 | orchestrator |
2025-05-03 00:34:53.920083 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-05-03 00:34:53.920443 | orchestrator | Saturday 03 May 2025 00:34:53 +0000 (0:00:01.202) 0:06:44.445 **********
2025-05-03 00:34:55.354157 | orchestrator | ok: [testbed-manager]
2025-05-03 00:34:55.354542 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:34:55.355761 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:34:55.357279 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:34:55.357379 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:34:55.358569 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:34:55.358960 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:34:55.359383 | orchestrator |
2025-05-03 00:34:55.359821 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-05-03 00:34:55.360290 | orchestrator | Saturday 03 May 2025 00:34:55 +0000 (0:00:01.440) 0:06:45.885 **********
2025-05-03 00:34:56.497858 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:34:56.498625 | orchestrator |
2025-05-03 00:34:56.502265 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-03 00:34:56.502715 | orchestrator | Saturday 03 May 2025 00:34:56 +0000 (0:00:00.864) 0:06:46.750 **********
2025-05-03 00:34:56.502743 | orchestrator |
2025-05-03 00:34:56.502765 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-03 00:34:56.503150 | orchestrator | Saturday 03 May 2025 00:34:56 +0000 (0:00:00.035) 0:06:46.787 **********
2025-05-03 00:34:56.504157 | orchestrator |
2025-05-03 00:34:56.504616 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-03 00:34:56.505466 | orchestrator | Saturday 03 May 2025 00:34:56 +0000 (0:00:00.042) 0:06:46.823 **********
2025-05-03 00:34:56.506323 | orchestrator |
2025-05-03 00:34:56.507108 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-03 00:34:56.507968 | orchestrator | Saturday 03 May 2025 00:34:56 +0000 (0:00:00.039) 0:06:46.866 **********
2025-05-03 00:34:56.508318 | orchestrator |
2025-05-03 00:34:56.508930 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-03 00:34:56.509733 | orchestrator | Saturday 03 May 2025 00:34:56 +0000 (0:00:00.037) 0:06:46.905 **********
2025-05-03 00:34:56.509981 | orchestrator |
2025-05-03 00:34:56.510371 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-03 00:34:56.510849 | orchestrator | Saturday 03 May 2025 00:34:56 +0000 (0:00:00.037) 0:06:46.943 **********
2025-05-03 00:34:56.511280 | orchestrator |
2025-05-03 00:34:56.511540 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-03 00:34:56.512044 | orchestrator | Saturday 03 May 2025 00:34:56 +0000 (0:00:00.045) 0:06:46.988 **********
2025-05-03 00:34:56.512352 | orchestrator |
2025-05-03 00:34:56.512808 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-05-03 00:34:56.513121 | orchestrator | Saturday 03 May 2025 00:34:56 +0000 (0:00:00.038) 0:06:47.027 **********
2025-05-03 00:34:57.584119 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:34:57.585298 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:34:57.588388 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:34:57.589387 | orchestrator |
2025-05-03 00:34:57.589996 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-05-03 00:34:57.591059 | orchestrator | Saturday 03 May 2025 00:34:57 +0000 (0:00:01.086) 0:06:48.114 **********
2025-05-03 00:34:59.113150 | orchestrator | changed: [testbed-manager]
2025-05-03 00:34:59.114812 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:34:59.116457 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:34:59.117605 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:34:59.118222 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:34:59.120321 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:34:59.120514 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:34:59.122522 | orchestrator |
2025-05-03 00:34:59.123529 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-05-03 00:34:59.124552 | orchestrator | Saturday 03 May 2025 00:34:59 +0000 (0:00:01.527) 0:06:49.641 **********
2025-05-03 00:35:00.269624 | orchestrator | changed: [testbed-manager]
2025-05-03 00:35:00.269951 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:35:00.269994 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:35:00.272016 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:35:00.272093 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:35:00.273061 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:35:00.274354 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:35:00.274964 | orchestrator |
2025-05-03 00:35:00.275822 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-05-03 00:35:00.276710 | orchestrator | Saturday 03 May 2025 00:35:00 +0000 (0:00:01.157) 0:06:50.799 **********
2025-05-03 00:35:00.390987 | orchestrator | skipping: [testbed-manager]
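The "Copy daemon.json configuration file" task earlier reported `changed` on every host, which is why the "Restart docker service" handler fires here: dockerd only reads /etc/docker/daemon.json at startup. A purely illustrative daemon.json (these particular options are assumptions, not the configuration the testbed actually deploys):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "live-restore": true
}
```

Options such as `log-driver`, `log-opts`, and `live-restore` are standard dockerd settings; whichever options the role templates in, the handler pattern above ensures the daemon is restarted only when the file actually changed.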
2025-05-03 00:35:02.521768 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:35:02.522608 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:35:02.523220 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:35:02.523826 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:35:02.525014 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:35:02.525265 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:35:02.526325 | orchestrator |
2025-05-03 00:35:02.526522 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-05-03 00:35:02.527049 | orchestrator | Saturday 03 May 2025 00:35:02 +0000 (0:00:02.251) 0:06:53.050 **********
2025-05-03 00:35:02.632766 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:35:02.633525 | orchestrator |
2025-05-03 00:35:02.634597 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-05-03 00:35:02.634941 | orchestrator | Saturday 03 May 2025 00:35:02 +0000 (0:00:00.111) 0:06:53.162 **********
2025-05-03 00:35:03.646093 | orchestrator | ok: [testbed-manager]
2025-05-03 00:35:03.646275 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:35:03.646301 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:35:03.646317 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:35:03.646338 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:35:03.646584 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:35:03.647486 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:35:03.648095 | orchestrator |
2025-05-03 00:35:03.648128 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-05-03 00:35:03.648579 | orchestrator | Saturday 03 May 2025 00:35:03 +0000 (0:00:01.013) 0:06:54.175 **********
2025-05-03 00:35:03.781629 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:35:03.847850 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:35:03.912667 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:35:03.983446 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:35:04.262373 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:35:04.393295 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:35:04.393783 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:35:04.393829 | orchestrator |
2025-05-03 00:35:04.394460 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-05-03 00:35:04.395057 | orchestrator | Saturday 03 May 2025 00:35:04 +0000 (0:00:00.749) 0:06:54.924 **********
2025-05-03 00:35:05.277198 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:35:05.278134 | orchestrator |
2025-05-03 00:35:05.278181 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-05-03 00:35:05.281096 | orchestrator | Saturday 03 May 2025 00:35:05 +0000 (0:00:00.882) 0:06:55.806 **********
2025-05-03 00:35:05.701083 | orchestrator | ok: [testbed-manager]
2025-05-03 00:35:06.121582 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:35:06.122000 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:35:06.122384 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:35:06.123285 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:35:06.123637 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:35:06.124308 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:35:06.124801 | orchestrator |
2025-05-03 00:35:06.125384 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-05-03 00:35:06.125988 | orchestrator | Saturday 03 May 2025 00:35:06 +0000 (0:00:00.845) 0:06:56.652 **********
2025-05-03 00:35:08.792822 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-05-03 00:35:08.793700 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-05-03 00:35:08.793756 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-05-03 00:35:08.795300 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-05-03 00:35:08.796431 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-05-03 00:35:08.797860 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-05-03 00:35:08.798781 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-05-03 00:35:08.799663 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-05-03 00:35:08.800429 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-05-03 00:35:08.801242 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-05-03 00:35:08.801911 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-05-03 00:35:08.802552 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-05-03 00:35:08.803483 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-05-03 00:35:08.805057 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-05-03 00:35:08.805257 | orchestrator |
2025-05-03 00:35:08.805283 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-05-03 00:35:08.805305 | orchestrator | Saturday 03 May 2025 00:35:08 +0000 (0:00:02.670) 0:06:59.322 **********
2025-05-03 00:35:08.934581 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:35:09.006417 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:35:09.074052 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:35:09.146363 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:35:09.212501 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:35:09.318268 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:35:09.318428 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:35:09.319322 | orchestrator |
2025-05-03 00:35:09.319954 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-05-03 00:35:09.320966 | orchestrator | Saturday 03 May 2025 00:35:09 +0000 (0:00:00.526) 0:06:59.849 **********
2025-05-03 00:35:10.134711 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:35:10.135336 | orchestrator |
2025-05-03 00:35:10.136082 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-05-03 00:35:10.136988 | orchestrator | Saturday 03 May 2025 00:35:10 +0000 (0:00:00.817) 0:07:00.666 **********
2025-05-03 00:35:10.565371 | orchestrator | ok: [testbed-manager]
2025-05-03 00:35:10.974837 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:35:10.975526 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:35:10.976463 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:35:10.977413 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:35:10.978279 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:35:10.978810 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:35:10.979376 | orchestrator |
2025-05-03 00:35:10.980376 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-05-03 00:35:10.980935 | orchestrator | Saturday 03 May 2025 00:35:10 +0000 (0:00:00.839) 0:07:01.505 **********
2025-05-03 00:35:11.407715 | orchestrator | ok: [testbed-manager]
2025-05-03 00:35:12.023122 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:35:12.024561 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:35:12.025505 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:35:12.026588 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:35:12.027165 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:35:12.028283 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:35:12.028586 | orchestrator |
2025-05-03 00:35:12.030379 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-05-03 00:35:12.030813 | orchestrator | Saturday 03 May 2025 00:35:12 +0000 (0:00:01.049) 0:07:02.555 **********
2025-05-03 00:35:12.156938 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:35:12.225975 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:35:12.290680 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:35:12.354916 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:35:12.426280 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:35:12.533205 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:35:12.533395 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:35:12.535246 | orchestrator |
2025-05-03 00:35:12.535857 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-05-03 00:35:12.537593 | orchestrator | Saturday 03 May 2025 00:35:12 +0000 (0:00:00.506) 0:07:03.062 **********
2025-05-03 00:35:13.932365 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:35:13.933404 | orchestrator | ok: [testbed-manager]
2025-05-03 00:35:13.934993 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:35:13.936496 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:35:13.937017 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:35:13.938263 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:35:13.939008 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:35:13.939726 | orchestrator |
2025-05-03 00:35:13.940982 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-05-03 00:35:13.941966 | orchestrator | Saturday 03 May 2025 00:35:13 +0000 (0:00:01.401) 0:07:04.463 **********
2025-05-03 00:35:14.077132 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:35:14.155952 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:35:14.225534 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:35:14.288947 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:35:14.352410 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:35:14.454796 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:35:14.456464 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:35:14.456911 | orchestrator |
2025-05-03 00:35:14.458628 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-05-03 00:35:14.459846 | orchestrator | Saturday 03 May 2025 00:35:14 +0000 (0:00:00.524) 0:07:04.987 **********
2025-05-03 00:35:16.637780 | orchestrator | ok: [testbed-manager]
2025-05-03 00:35:16.638224 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:35:16.639926 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:35:16.640733 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:35:16.641748 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:35:16.642665 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:35:16.643312 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:35:16.644075 | orchestrator |
2025-05-03 00:35:16.645792 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-05-03 00:35:17.932579 | orchestrator | Saturday 03 May 2025 00:35:16 +0000 (0:00:02.180) 0:07:07.167 **********
2025-05-03 00:35:17.932748 | orchestrator | ok: [testbed-manager]
2025-05-03 00:35:17.932832 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:35:17.934725 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:35:17.936188 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:35:17.936515 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:35:17.937927 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:35:17.938896 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:35:17.939325 | orchestrator |
2025-05-03 00:35:17.940318 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-05-03 00:35:17.940947 | orchestrator | Saturday 03 May 2025 00:35:17 +0000 (0:00:01.289) 0:07:08.457 **********
2025-05-03 00:35:19.631721 | orchestrator | ok: [testbed-manager]
2025-05-03 00:35:19.635220 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:35:19.636139 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:35:19.636556 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:35:19.637936 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:35:19.639068 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:35:19.639751 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:35:19.642285 | orchestrator |
2025-05-03 00:35:19.642437 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-05-03 00:35:19.643505 | orchestrator | Saturday 03 May 2025 00:35:19 +0000 (0:00:01.703) 0:07:10.161 **********
2025-05-03 00:35:21.308671 | orchestrator | ok: [testbed-manager]
2025-05-03 00:35:21.311315 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:35:21.312102 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:35:21.312143 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:35:21.313431 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:35:21.314152 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:35:21.315069 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:35:21.315472 | orchestrator |
2025-05-03 00:35:21.316427 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-03 00:35:21.316736 | orchestrator | Saturday 03 May 2025 00:35:21 +0000 (0:00:01.675) 0:07:11.837 **********
2025-05-03 00:35:21.956648 | orchestrator | ok: [testbed-manager]
2025-05-03 00:35:22.420333 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:35:22.421427 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:35:22.421477 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:35:22.421814 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:35:22.422993 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:35:22.423944 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:35:22.424941 | orchestrator |
2025-05-03 00:35:22.425351 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-03 00:35:22.427608 | orchestrator | Saturday 03 May 2025 00:35:22 +0000 (0:00:01.110) 0:07:12.947 **********
2025-05-03 00:35:22.571286 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:35:22.645181 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:35:22.722538 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:35:22.787582 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:35:22.854289 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:35:23.283257 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:35:23.285190 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:35:23.285239 | orchestrator |
2025-05-03 00:35:23.286511 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-05-03 00:35:23.286935 | orchestrator | Saturday 03 May 2025 00:35:23 +0000 (0:00:00.867) 0:07:13.814 **********
2025-05-03 00:35:23.427213 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:35:23.500618 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:35:23.568249 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:35:23.640341 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:35:23.705611 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:35:23.803692 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:35:23.804967 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:35:23.805981 | orchestrator |
2025-05-03 00:35:23.806265 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-05-03 00:35:23.807621 | orchestrator | Saturday 03 May 2025 00:35:23 +0000 (0:00:00.520) 0:07:14.335 **********
2025-05-03 00:35:23.945341 | orchestrator | ok: [testbed-manager]
2025-05-03 00:35:24.013754 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:35:24.094972 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:35:24.186409 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:35:24.256155 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:35:24.355363 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:35:24.356340 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:35:24.357225 | orchestrator |
2025-05-03 00:35:24.358206 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-05-03 00:35:24.361066 | orchestrator | Saturday 03 May 2025 00:35:24 +0000 (0:00:00.553) 0:07:14.888 **********
2025-05-03 00:35:24.489067 | orchestrator | ok: [testbed-manager]
2025-05-03 00:35:24.758249 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:35:24.825437 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:35:24.893528 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:35:24.967607 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:35:25.077067 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:35:25.077412 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:35:25.077926 | orchestrator |
2025-05-03 00:35:25.078846 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-05-03 00:35:25.079339 | orchestrator | Saturday 03 May 2025 00:35:25 +0000 (0:00:00.718) 0:07:15.607 **********
2025-05-03 00:35:25.225842 | orchestrator | ok: [testbed-manager]
2025-05-03 00:35:25.304367 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:35:25.371310 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:35:25.443006 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:35:25.518339 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:35:25.643103 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:35:25.644028 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:35:25.644075 | orchestrator |
2025-05-03 00:35:25.644918 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-05-03 00:35:25.645821 | orchestrator | Saturday 03 May 2025 00:35:25 +0000 (0:00:00.563) 0:07:16.170 **********
2025-05-03 00:35:31.479207 | orchestrator | ok: [testbed-manager]
2025-05-03 00:35:31.479816 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:35:31.479895 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:35:31.481261 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:35:31.481759 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:35:31.482561 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:35:31.483064 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:35:31.483652 | orchestrator |
2025-05-03 00:35:31.484408 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-05-03 00:35:31.485551 | orchestrator | Saturday 03 May 2025 00:35:31 +0000 (0:00:05.840) 0:07:22.011 **********
2025-05-03 00:35:31.697260 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:35:31.764087 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:35:31.829704 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:35:31.899694 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:35:32.016755 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:35:32.017482 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:35:32.018823 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:35:32.020931 | orchestrator |
2025-05-03 00:35:33.000502 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-05-03 00:35:33.000660 | orchestrator | Saturday 03 May 2025 00:35:32 +0000 (0:00:00.535) 0:07:22.546 **********
2025-05-03 00:35:33.000700 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:35:33.000776 | orchestrator |
2025-05-03 00:35:33.002180 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-05-03 00:35:33.006117 | orchestrator | Saturday 03 May 2025 00:35:32 +0000 (0:00:00.983) 0:07:23.530 **********
2025-05-03 00:35:34.878794 | orchestrator | ok: [testbed-manager]
2025-05-03 00:35:34.879039 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:35:34.880736 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:35:34.881816 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:35:34.883623 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:35:34.883932 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:35:34.884927 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:35:34.885261 | orchestrator |
2025-05-03 00:35:34.886344 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-05-03 00:35:34.886983 | orchestrator | Saturday 03 May 2025 00:35:34 +0000 (0:00:01.877) 0:07:25.408 **********
2025-05-03 00:35:35.986489 | orchestrator | ok: [testbed-manager]
2025-05-03 00:35:35.986974 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:35:35.987025 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:35:35.987612 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:35:35.988107 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:35:35.989035 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:35:35.992366 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:35:35.992786 | orchestrator |
2025-05-03 00:35:35.993393 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-05-03 00:35:35.993970 | orchestrator | Saturday 03 May 2025 00:35:35 +0000 (0:00:01.108) 0:07:26.516 **********
2025-05-03 00:35:36.415365 | orchestrator | ok: [testbed-manager]
2025-05-03 00:35:36.840302 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:35:36.840480 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:35:36.840996 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:35:36.841991 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:35:36.842536 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:35:36.844281 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:35:36.844596 | orchestrator |
2025-05-03 00:35:36.844624 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-05-03 00:35:36.844646 | orchestrator | Saturday 03 May 2025 00:35:36 +0000 (0:00:00.855) 0:07:27.372 **********
2025-05-03 00:35:38.794984 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-03 00:35:38.795747 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-03 00:35:38.795802 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-03 00:35:38.796529 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-03 00:35:38.800956 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-03 00:35:38.801529 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-03 00:35:38.802977 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-03 00:35:38.803404 | orchestrator |
2025-05-03 00:35:38.803920 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-05-03 00:35:38.804654 | orchestrator | Saturday 03 May 2025 00:35:38 +0000 (0:00:01.952) 0:07:29.324 **********
2025-05-03 00:35:39.602687 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:35:39.603062 | orchestrator |
2025-05-03 00:35:39.604178 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-05-03 00:35:39.604448 | orchestrator | Saturday 03 May 2025 00:35:39 +0000 (0:00:00.811) 0:07:30.136 **********
2025-05-03 00:35:48.273351 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:35:48.273835 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:35:48.275333 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:35:48.276194 | orchestrator | changed: [testbed-manager]
2025-05-03 00:35:48.277281 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:35:48.278884 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:35:48.279182 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:35:48.279669 | orchestrator |
2025-05-03 00:35:48.280227 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-05-03 00:35:48.280636 | orchestrator | Saturday 03 May 2025 00:35:48 +0000 (0:00:08.667) 0:07:38.803 **********
2025-05-03 00:35:50.170415 | orchestrator | ok: [testbed-manager]
2025-05-03 00:35:50.170789 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:35:50.170837 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:35:50.172155 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:35:50.172985 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:35:50.174412 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:35:50.176842 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:35:50.177493 | orchestrator |
2025-05-03 00:35:50.179891 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-05-03 00:35:50.180493 | orchestrator | Saturday 03 May 2025 00:35:50 +0000 (0:00:01.895) 0:07:40.698 **********
2025-05-03 00:35:51.459784 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:35:51.460101 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:35:51.460939 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:35:51.461192 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:35:51.463716 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:35:51.465512 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:35:51.466203 | orchestrator |
2025-05-03 00:35:51.467138 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-05-03 00:35:51.467709 | orchestrator | Saturday 03 May 2025 00:35:51 +0000 (0:00:01.292) 0:07:41.991 **********
2025-05-03 00:35:53.874752 | orchestrator | changed: [testbed-manager]
2025-05-03 00:35:53.875145 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:35:53.875192 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:35:53.876328 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:35:53.880107 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:35:53.880729 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:35:53.880914 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:35:53.880939 | orchestrator |
2025-05-03 00:35:53.880957 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-05-03 00:35:53.880973 | orchestrator |
2025-05-03 00:35:53.881003 | orchestrator | TASK [Include hardening role] **************************************************
2025-05-03 00:35:53.881080 | orchestrator | Saturday 03 May 2025 00:35:53 +0000 (0:00:02.416) 0:07:44.407 **********
2025-05-03 00:35:53.999904 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:35:54.065471 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:35:54.127322 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:35:54.196452 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:35:54.268167 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:35:54.397339 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:35:54.397961 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:35:54.398305 | orchestrator |
2025-05-03 00:35:54.398767 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-05-03 00:35:54.399562 | orchestrator |
2025-05-03 00:35:54.401663 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-05-03 00:35:54.403302 | orchestrator | Saturday 03 May 2025 00:35:54 +0000 (0:00:00.520) 0:07:44.928 **********
2025-05-03 00:35:55.703588 | orchestrator | changed: [testbed-manager]
2025-05-03 00:35:55.708152 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:35:55.708468 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:35:55.708500 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:35:55.708515 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:35:55.708561 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:35:55.709058 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:35:55.710117 | orchestrator |
2025-05-03 00:35:55.710594 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-05-03 00:35:55.711046 | orchestrator | Saturday 03 May 2025 00:35:55 +0000 (0:00:01.306) 0:07:46.234 **********
2025-05-03 00:35:57.079405 | orchestrator | ok: [testbed-manager]
2025-05-03 00:35:57.084281 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:35:57.084385 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:35:57.084961 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:35:57.092244 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:35:57.095099 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:35:57.095142 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:35:57.096126 | orchestrator |
2025-05-03 00:35:57.096997 | orchestrator | TASK [Include auditd role] *****************************************************
2025-05-03 00:35:57.097623 | orchestrator | Saturday 03 May 2025 00:35:57 +0000 (0:00:01.372) 0:07:47.607 **********
2025-05-03 00:35:57.199932 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:35:57.286821 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:35:57.564259 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:35:57.628780 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:35:57.692571 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:35:58.095237 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:35:58.098259 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:35:58.101647 | orchestrator |
2025-05-03 00:35:59.296712 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-05-03 00:35:59.296939 | orchestrator | Saturday 03 May 2025 00:35:58 +0000 (0:00:01.019) 0:07:48.626 **********
2025-05-03 00:35:59.296983 | orchestrator | changed: [testbed-manager]
2025-05-03 00:35:59.297064 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:35:59.297689 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:35:59.298534 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:35:59.299124 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:35:59.299897 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:35:59.300369 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:35:59.301232 | orchestrator |
2025-05-03 00:35:59.302126 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-05-03 00:35:59.302930 | orchestrator |
2025-05-03 00:35:59.304296 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-05-03 00:35:59.305309 | orchestrator | Saturday 03 May 2025 00:35:59 +0000 (0:00:01.202) 0:07:49.828 **********
2025-05-03 00:36:00.108609 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:36:00.111016 | orchestrator |
2025-05-03 00:36:00.111146 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-05-03 00:36:00.112020 | orchestrator | Saturday 03 May 2025 00:36:00 +0000 (0:00:00.811) 0:07:50.639 **********
2025-05-03 00:36:00.521682 | orchestrator | ok: [testbed-manager]
2025-05-03 00:36:01.156832 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:36:01.157736 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:36:01.158318 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:36:01.158915 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:36:01.160295 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:36:01.162084 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:36:01.162782 | orchestrator |
2025-05-03 00:36:01.163989 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-05-03 00:36:01.164743 | orchestrator | Saturday 03 May 2025 00:36:01 +0000 (0:00:01.049) 0:07:51.689 **********
2025-05-03 00:36:02.326185 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:36:02.326571 | orchestrator | changed: [testbed-manager]
2025-05-03 00:36:02.326771 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:36:02.331602 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:36:02.332253 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:36:02.333158 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:36:02.333540 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:36:02.334192 | orchestrator |
2025-05-03 00:36:02.334732 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-05-03 00:36:02.335292 | orchestrator | Saturday 03 May 2025 00:36:02 +0000 (0:00:01.165) 0:07:52.854 **********
2025-05-03 00:36:03.314742 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:36:03.315965 | orchestrator |
2025-05-03 00:36:03.316019 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-05-03 00:36:03.317207 | orchestrator | Saturday 03 May 2025 00:36:03 +0000 (0:00:00.989) 0:07:53.844 **********
2025-05-03 00:36:03.768026 | orchestrator | ok: [testbed-manager]
2025-05-03 00:36:04.180800 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:36:04.181806 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:36:04.182615 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:36:04.184155 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:36:04.185444 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:36:04.185849 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:36:04.187000 | orchestrator |
2025-05-03 00:36:04.187737 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-05-03 00:36:04.188280 | orchestrator | Saturday 03 May 2025 00:36:04 +0000 (0:00:00.867) 0:07:54.711 **********
2025-05-03 00:36:04.637432 | orchestrator | changed: [testbed-manager]
2025-05-03 00:36:05.376403 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:36:05.376735 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:36:05.377122 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:36:05.377830 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:36:05.380600 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:36:05.380974 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:36:05.382359 | orchestrator |
2025-05-03 00:36:05.382636 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 00:36:05.384209 | orchestrator | 2025-05-03 00:36:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-03 00:36:05.386099 | orchestrator | 2025-05-03 00:36:05 | INFO  | Please wait and do not abort execution.
2025-05-03 00:36:05.386144 | orchestrator | testbed-manager : ok=160  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-05-03 00:36:05.386569 | orchestrator | testbed-node-0 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-03 00:36:05.387153 | orchestrator | testbed-node-1 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-03 00:36:05.388059 | orchestrator | testbed-node-2 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-03 00:36:05.388540 | orchestrator | testbed-node-3 : ok=167  changed=62  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-05-03 00:36:05.388912 | orchestrator | testbed-node-4 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-03 00:36:05.389609 | orchestrator | testbed-node-5 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-03 00:36:05.390152 | orchestrator |
2025-05-03 00:36:05.390971 | orchestrator | Saturday 03 May 2025 00:36:05 +0000 (0:00:01.195) 0:07:55.906 **********
2025-05-03 00:36:05.391682 | orchestrator | ===============================================================================
2025-05-03 00:36:05.392256 | orchestrator | osism.commons.packages : Install required packages --------------------- 81.48s
2025-05-03 00:36:05.392287 | orchestrator | osism.commons.packages : Download required packages -------------------- 39.33s
2025-05-03 00:36:05.393151 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 32.68s
2025-05-03 00:36:05.393413 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.54s
2025-05-03 00:36:05.393754 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.60s
2025-05-03 00:36:05.394435 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.34s
2025-05-03 00:36:05.394879 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 12.00s
2025-05-03 00:36:05.395210 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.99s
2025-05-03 00:36:05.395551 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.27s
2025-05-03 00:36:05.396021 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.67s
2025-05-03 00:36:05.396468 | orchestrator | osism.commons.packages : Upgrade packages ------------------------------- 8.19s
2025-05-03 00:36:05.396918 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.14s
2025-05-03 00:36:05.397157 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.59s
2025-05-03 00:36:05.397468 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.09s
2025-05-03 00:36:05.397889 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 6.93s
2025-05-03 00:36:05.398213 | orchestrator | osism.services.docker : Add repository ---------------------------------- 6.89s
2025-05-03 00:36:05.398658 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.24s
2025-05-03 00:36:05.400635 | orchestrator | osism.services.docker : Ensure that some packages are not installed ----- 5.88s
2025-05-03 00:36:06.184200 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.87s
2025-05-03 00:36:06.184344 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.84s
2025-05-03 00:36:06.184386 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-05-03 00:36:08.099947 | orchestrator | + osism apply network
2025-05-03 00:36:08.100095 | orchestrator | 2025-05-03 00:36:08 | INFO  | Task 2bd04a43-018f-474b-9b5a-7dfe3fa615f6 (network) was prepared for execution.
2025-05-03 00:36:11.325377 | orchestrator | 2025-05-03 00:36:08 | INFO  | It takes a moment until task 2bd04a43-018f-474b-9b5a-7dfe3fa615f6 (network) has been started and output is visible here.
2025-05-03 00:36:11.325559 | orchestrator |
2025-05-03 00:36:11.325631 | orchestrator | PLAY [Apply role network] ******************************************************
2025-05-03 00:36:11.325655 | orchestrator |
2025-05-03 00:36:11.327211 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-05-03 00:36:11.327681 | orchestrator | Saturday 03 May 2025 00:36:11 +0000 (0:00:00.195) 0:00:00.195 **********
2025-05-03 00:36:11.472410 | orchestrator | ok: [testbed-manager]
2025-05-03 00:36:11.552413 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:36:11.634314 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:36:11.701566 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:36:11.781449 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:36:12.027115 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:36:12.027737 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:36:12.027779 | orchestrator |
2025-05-03 00:36:12.028380 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-05-03 00:36:12.028818 | orchestrator | Saturday 03 May 2025 00:36:12 +0000 (0:00:00.703) 0:00:00.899 **********
2025-05-03 00:36:13.193380 | orchestrator
| included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 00:36:13.194204 | orchestrator | 2025-05-03 00:36:13.194518 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-05-03 00:36:13.197361 | orchestrator | Saturday 03 May 2025 00:36:13 +0000 (0:00:01.163) 0:00:02.063 ********** 2025-05-03 00:36:15.049391 | orchestrator | ok: [testbed-manager] 2025-05-03 00:36:15.051009 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:36:15.051083 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:36:15.053419 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:36:15.054556 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:36:15.058182 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:36:15.059233 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:36:15.059259 | orchestrator | 2025-05-03 00:36:15.059283 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-05-03 00:36:16.770267 | orchestrator | Saturday 03 May 2025 00:36:15 +0000 (0:00:01.855) 0:00:03.918 ********** 2025-05-03 00:36:16.770407 | orchestrator | ok: [testbed-manager] 2025-05-03 00:36:16.770707 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:36:16.774987 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:36:16.776190 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:36:16.776242 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:36:16.776268 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:36:16.776304 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:36:16.776750 | orchestrator | 2025-05-03 00:36:16.776791 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-05-03 00:36:16.776825 | orchestrator | Saturday 03 May 2025 00:36:16 +0000 (0:00:01.721) 0:00:05.640 
********** 2025-05-03 00:36:17.277705 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-05-03 00:36:17.277926 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-05-03 00:36:17.889219 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-05-03 00:36:17.890104 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-05-03 00:36:17.890154 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-05-03 00:36:17.891520 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-05-03 00:36:17.892460 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-05-03 00:36:17.893332 | orchestrator | 2025-05-03 00:36:17.893777 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-05-03 00:36:17.894571 | orchestrator | Saturday 03 May 2025 00:36:17 +0000 (0:00:01.118) 0:00:06.759 ********** 2025-05-03 00:36:19.570154 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-03 00:36:19.570891 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-03 00:36:19.570960 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-03 00:36:19.571796 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-03 00:36:19.572519 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-03 00:36:19.573307 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-03 00:36:19.573590 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-03 00:36:19.574746 | orchestrator | 2025-05-03 00:36:19.575203 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-05-03 00:36:19.575826 | orchestrator | Saturday 03 May 2025 00:36:19 +0000 (0:00:01.683) 0:00:08.442 ********** 2025-05-03 00:36:21.236902 | orchestrator | changed: [testbed-manager] 2025-05-03 00:36:21.238518 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:36:21.239812 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:36:21.240769 | orchestrator 
| changed: [testbed-node-2] 2025-05-03 00:36:21.241884 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:36:21.243037 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:36:21.243538 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:36:21.244250 | orchestrator | 2025-05-03 00:36:21.244899 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-05-03 00:36:21.245662 | orchestrator | Saturday 03 May 2025 00:36:21 +0000 (0:00:01.657) 0:00:10.099 ********** 2025-05-03 00:36:21.700277 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-03 00:36:21.795710 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-03 00:36:22.252814 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-03 00:36:22.254843 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-03 00:36:22.255282 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-03 00:36:22.256158 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-03 00:36:22.259274 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-03 00:36:22.708035 | orchestrator | 2025-05-03 00:36:22.708176 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-05-03 00:36:22.708196 | orchestrator | Saturday 03 May 2025 00:36:22 +0000 (0:00:01.028) 0:00:11.128 ********** 2025-05-03 00:36:22.708228 | orchestrator | ok: [testbed-manager] 2025-05-03 00:36:22.820823 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:36:23.427642 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:36:23.428480 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:36:23.428546 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:36:23.428571 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:36:23.428666 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:36:23.428688 | orchestrator | 2025-05-03 00:36:23.428711 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-05-03 
00:36:23.430468 | orchestrator | Saturday 03 May 2025 00:36:23 +0000 (0:00:01.169) 0:00:12.297 ********** 2025-05-03 00:36:23.603930 | orchestrator | skipping: [testbed-manager] 2025-05-03 00:36:23.687088 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:36:23.765157 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:36:23.838190 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:36:23.930852 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:36:24.252256 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:36:24.252853 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:36:24.253457 | orchestrator | 2025-05-03 00:36:24.254076 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-05-03 00:36:24.255163 | orchestrator | Saturday 03 May 2025 00:36:24 +0000 (0:00:00.826) 0:00:13.123 ********** 2025-05-03 00:36:26.246335 | orchestrator | ok: [testbed-manager] 2025-05-03 00:36:26.248675 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:36:26.248802 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:36:26.249515 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:36:26.249567 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:36:26.249584 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:36:26.249610 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:36:26.250967 | orchestrator | 2025-05-03 00:36:26.251012 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-05-03 00:36:26.251524 | orchestrator | Saturday 03 May 2025 00:36:26 +0000 (0:00:01.993) 0:00:15.117 ********** 2025-05-03 00:36:28.111357 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-05-03 00:36:28.111898 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-03 00:36:28.111957 | orchestrator | 
changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-03 00:36:28.111985 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-03 00:36:28.112002 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-03 00:36:28.112025 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-03 00:36:28.112494 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-03 00:36:28.113220 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-03 00:36:28.113695 | orchestrator | 2025-05-03 00:36:28.114458 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-05-03 00:36:28.115373 | orchestrator | Saturday 03 May 2025 00:36:28 +0000 (0:00:01.852) 0:00:16.970 ********** 2025-05-03 00:36:29.655409 | orchestrator | ok: [testbed-manager] 2025-05-03 00:36:29.656695 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:36:29.658606 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:36:29.659764 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:36:29.661597 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:36:29.661767 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:36:29.661798 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:36:29.662569 | orchestrator | 2025-05-03 00:36:29.662900 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-05-03 00:36:29.663375 | orchestrator | Saturday 03 May 2025 00:36:29 +0000 (0:00:01.557) 0:00:18.527 ********** 2025-05-03 00:36:31.128574 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 00:36:31.129526 | orchestrator | 2025-05-03 00:36:31.131094 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-05-03 00:36:31.659525 | orchestrator | Saturday 03 May 2025 00:36:31 +0000 (0:00:01.470) 0:00:19.997 ********** 2025-05-03 00:36:31.659677 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:36:32.082084 | orchestrator | ok: [testbed-manager] 2025-05-03 00:36:32.082786 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:36:32.083693 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:36:32.085075 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:36:32.086057 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:36:32.087389 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:36:32.088431 | orchestrator | 2025-05-03 00:36:32.089461 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-05-03 00:36:32.090250 | orchestrator | Saturday 03 May 2025 00:36:32 +0000 (0:00:00.958) 0:00:20.956 ********** 2025-05-03 00:36:32.245296 | orchestrator | ok: [testbed-manager] 2025-05-03 00:36:32.328169 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:36:32.589349 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:36:32.675011 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:36:32.759904 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:36:32.898146 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:36:32.898369 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:36:32.898404 | orchestrator | 2025-05-03 00:36:32.898765 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-05-03 00:36:32.899278 | orchestrator | Saturday 03 May 2025 00:36:32 +0000 (0:00:00.811) 
0:00:21.768 ********** 2025-05-03 00:36:33.273013 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-03 00:36:33.273226 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-05-03 00:36:33.365388 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-03 00:36:33.366397 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-05-03 00:36:33.462369 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-03 00:36:33.462541 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-05-03 00:36:33.939825 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-03 00:36:33.940242 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-05-03 00:36:33.941796 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-03 00:36:33.942632 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-05-03 00:36:33.945596 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-03 00:36:33.946249 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-05-03 00:36:33.946907 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-03 00:36:33.947178 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-05-03 00:36:33.947771 | orchestrator | 2025-05-03 00:36:33.948491 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-05-03 00:36:33.948998 | orchestrator | Saturday 03 May 2025 00:36:33 +0000 (0:00:01.037) 0:00:22.805 ********** 2025-05-03 00:36:34.285052 | orchestrator | skipping: [testbed-manager] 2025-05-03 00:36:34.374496 | orchestrator | skipping: 
[testbed-node-0] 2025-05-03 00:36:34.463759 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:36:34.553069 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:36:34.636137 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:36:35.902307 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:36:35.908898 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:36:35.909837 | orchestrator | 2025-05-03 00:36:35.909988 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-05-03 00:36:35.911131 | orchestrator | Saturday 03 May 2025 00:36:35 +0000 (0:00:01.961) 0:00:24.767 ********** 2025-05-03 00:36:36.072724 | orchestrator | skipping: [testbed-manager] 2025-05-03 00:36:36.193405 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:36:36.492955 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:36:36.573160 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:36:36.660778 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:36:36.695476 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:36:36.696195 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:36:36.697217 | orchestrator | 2025-05-03 00:36:36.698327 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-03 00:36:36.699320 | orchestrator | 2025-05-03 00:36:36 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-03 00:36:36.699922 | orchestrator | 2025-05-03 00:36:36 | INFO  | Please wait and do not abort execution. 
2025-05-03 00:36:36.701517 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-03 00:36:36.702416 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-03 00:36:36.703131 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-03 00:36:36.703853 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-03 00:36:36.704940 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-03 00:36:36.706458 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-03 00:36:36.707463 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-03 00:36:36.708272 | orchestrator |
2025-05-03 00:36:36.709421 | orchestrator | Saturday 03 May 2025 00:36:36 +0000 (0:00:00.802) 0:00:25.569 **********
2025-05-03 00:36:36.709927 | orchestrator | ===============================================================================
2025-05-03 00:36:36.710786 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 1.99s
2025-05-03 00:36:36.711428 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 1.96s
2025-05-03 00:36:36.712363 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.86s
2025-05-03 00:36:36.713384 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.85s
2025-05-03 00:36:36.714425 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.72s
2025-05-03 00:36:36.714851 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.68s
2025-05-03 00:36:36.716034 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.66s
2025-05-03 00:36:36.716487 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.56s
2025-05-03 00:36:36.717365 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.47s
2025-05-03 00:36:36.717850 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.17s
2025-05-03 00:36:36.718365 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.16s
2025-05-03 00:36:36.718820 | orchestrator | osism.commons.network : Create required directories --------------------- 1.12s
2025-05-03 00:36:36.720104 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.04s
2025-05-03 00:36:36.720789 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.03s
2025-05-03 00:36:36.721106 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.96s
2025-05-03 00:36:36.721808 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 0.83s
2025-05-03 00:36:36.722361 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.81s
2025-05-03 00:36:36.722672 | orchestrator | osism.commons.network : Netplan configuration changed ------------------- 0.80s
2025-05-03 00:36:36.724579 | orchestrator | osism.commons.network : Gather variables for each operating system ------ 0.70s
2025-05-03 00:36:37.231125 | orchestrator | + osism apply wireguard
2025-05-03 00:36:38.993641 | orchestrator | 2025-05-03 00:36:38 | INFO  | Task 14ca0206-5661-42ea-b7bd-e806f04c3752 (wireguard) was prepared for execution.
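Each `osism apply` run above ends in an Ansible PLAY RECAP with per-host counters. A wrapper around such a job can gate on those counters; as a minimal illustrative sketch (the recap line below is copied from this log, the parsing itself is not part of the job):

```shell
#!/bin/sh
# Extract the failed= counter from an Ansible PLAY RECAP line such as the
# ones printed above, so a wrapper script could abort when a host failed.
recap='testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0'
failed=$(printf '%s\n' "$recap" | grep -o 'failed=[0-9]*' | cut -d= -f2)
echo "failed=$failed"
```

The same `grep -o`/`cut` pattern works for `unreachable=` or any of the other counters.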
2025-05-03 00:36:42.233810 | orchestrator | 2025-05-03 00:36:38 | INFO  | It takes a moment until task 14ca0206-5661-42ea-b7bd-e806f04c3752 (wireguard) has been started and output is visible here. 2025-05-03 00:36:42.234004 | orchestrator | 2025-05-03 00:36:42.234299 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-05-03 00:36:42.236621 | orchestrator | 2025-05-03 00:36:42.237068 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-05-03 00:36:42.237102 | orchestrator | Saturday 03 May 2025 00:36:42 +0000 (0:00:00.166) 0:00:00.166 ********** 2025-05-03 00:36:43.750156 | orchestrator | ok: [testbed-manager] 2025-05-03 00:36:43.751447 | orchestrator | 2025-05-03 00:36:43.751667 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-05-03 00:36:43.752184 | orchestrator | Saturday 03 May 2025 00:36:43 +0000 (0:00:01.523) 0:00:01.689 ********** 2025-05-03 00:36:51.097484 | orchestrator | changed: [testbed-manager] 2025-05-03 00:36:51.098128 | orchestrator | 2025-05-03 00:36:51.098717 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-05-03 00:36:51.101839 | orchestrator | Saturday 03 May 2025 00:36:51 +0000 (0:00:07.351) 0:00:09.041 ********** 2025-05-03 00:36:51.668334 | orchestrator | changed: [testbed-manager] 2025-05-03 00:36:51.669520 | orchestrator | 2025-05-03 00:36:51.670188 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-05-03 00:36:51.671825 | orchestrator | Saturday 03 May 2025 00:36:51 +0000 (0:00:00.571) 0:00:09.612 ********** 2025-05-03 00:36:52.084150 | orchestrator | changed: [testbed-manager] 2025-05-03 00:36:52.084895 | orchestrator | 2025-05-03 00:36:52.084988 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-05-03 00:36:52.085830 | orchestrator 
| Saturday 03 May 2025 00:36:52 +0000 (0:00:00.412) 0:00:10.025 ********** 2025-05-03 00:36:52.600464 | orchestrator | ok: [testbed-manager] 2025-05-03 00:36:52.601312 | orchestrator | 2025-05-03 00:36:52.602263 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-05-03 00:36:52.604247 | orchestrator | Saturday 03 May 2025 00:36:52 +0000 (0:00:00.518) 0:00:10.543 ********** 2025-05-03 00:36:53.114434 | orchestrator | ok: [testbed-manager] 2025-05-03 00:36:53.116627 | orchestrator | 2025-05-03 00:36:53.117399 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-05-03 00:36:53.118232 | orchestrator | Saturday 03 May 2025 00:36:53 +0000 (0:00:00.512) 0:00:11.056 ********** 2025-05-03 00:36:53.529258 | orchestrator | ok: [testbed-manager] 2025-05-03 00:36:53.531145 | orchestrator | 2025-05-03 00:36:53.536985 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-05-03 00:36:54.676349 | orchestrator | Saturday 03 May 2025 00:36:53 +0000 (0:00:00.415) 0:00:11.471 ********** 2025-05-03 00:36:54.676487 | orchestrator | changed: [testbed-manager] 2025-05-03 00:36:54.679311 | orchestrator | 2025-05-03 00:36:54.679588 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-05-03 00:36:54.680692 | orchestrator | Saturday 03 May 2025 00:36:54 +0000 (0:00:01.146) 0:00:12.617 ********** 2025-05-03 00:36:55.532561 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-03 00:36:55.532784 | orchestrator | changed: [testbed-manager] 2025-05-03 00:36:55.533828 | orchestrator | 2025-05-03 00:36:55.535400 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-05-03 00:36:55.536766 | orchestrator | Saturday 03 May 2025 00:36:55 +0000 (0:00:00.856) 0:00:13.474 ********** 2025-05-03 00:36:57.201596 | orchestrator | changed: 
[testbed-manager]
2025-05-03 00:36:57.201926 | orchestrator |
2025-05-03 00:36:57.203660 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-05-03 00:36:57.204252 | orchestrator | Saturday 03 May 2025 00:36:57 +0000 (0:00:01.668) 0:00:15.143 **********
2025-05-03 00:36:58.118638 | orchestrator | changed: [testbed-manager]
2025-05-03 00:36:58.119701 | orchestrator |
2025-05-03 00:36:58.124119 | orchestrator | 2025-05-03 00:36:58 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-03 00:36:58.124203 | orchestrator | 2025-05-03 00:36:58 | INFO  | Please wait and do not abort execution.
2025-05-03 00:36:58.124230 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 00:36:58.125242 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:36:58.125722 | orchestrator |
2025-05-03 00:36:58.126986 | orchestrator | Saturday 03 May 2025 00:36:58 +0000 (0:00:00.919) 0:00:16.062 **********
2025-05-03 00:36:58.127769 | orchestrator | ===============================================================================
2025-05-03 00:36:58.128298 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.35s
2025-05-03 00:36:58.129165 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.67s
2025-05-03 00:36:58.129527 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.52s
2025-05-03 00:36:58.129992 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.15s
2025-05-03 00:36:58.130466 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.92s
2025-05-03 00:36:58.130817 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.86s
2025-05-03 00:36:58.131592 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.57s
2025-05-03 00:36:58.131913 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.52s
2025-05-03 00:36:58.132374 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.51s
2025-05-03 00:36:58.132986 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s
2025-05-03 00:36:58.133151 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.41s
2025-05-03 00:36:58.600291 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-05-03 00:36:58.638290 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-05-03 00:36:58.736241 | orchestrator | Dload Upload Total Spent Left Speed
2025-05-03 00:36:58.736380 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 152 0 --:--:-- --:--:-- --:--:-- 153
2025-05-03 00:36:58.750662 | orchestrator | + osism apply --environment custom workarounds
2025-05-03 00:37:00.095083 | orchestrator | 2025-05-03 00:37:00 | INFO  | Trying to run play workarounds in environment custom
2025-05-03 00:37:00.140932 | orchestrator | 2025-05-03 00:37:00 | INFO  | Task 9cab8823-0487-410d-8e80-1f903f5792a2 (workarounds) was prepared for execution.
2025-05-03 00:37:03.090234 | orchestrator | 2025-05-03 00:37:00 | INFO  | It takes a moment until task 9cab8823-0487-410d-8e80-1f903f5792a2 (workarounds) has been started and output is visible here.
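The wireguard play above generates keys, a preshared key, and then renders `wg0.conf` on testbed-manager. The actual rendered file never appears in this log; as a hedged sketch, a wg-quick style server config generally has this shape (every value below is a placeholder, not the testbed's real keys or addresses):

```shell
#!/bin/sh
# Skeleton of a wg-quick wg0.conf like the one the role's "Copy wg0.conf
# configuration file" task deploys; all values are placeholders.
cat > wg0.conf <<'EOF'
[Interface]
Address = 192.168.42.1/24          # placeholder tunnel address
ListenPort = 51820
PrivateKey = <server-private-key>  # from the "Create public and private key" task

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>     # from the "Create preshared key" task
AllowedIPs = 192.168.42.2/32
EOF
grep -c '^\[' wg0.conf             # prints 2: the Interface and Peer sections
```

The "Restart wg0 service" handler at the end of the play corresponds to restarting `wg-quick@wg0.service`, which re-reads this file.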
2025-05-03 00:37:03.090477 | orchestrator | 2025-05-03 00:37:03.090574 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-03 00:37:03.093516 | orchestrator | 2025-05-03 00:37:03.095336 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-05-03 00:37:03.095395 | orchestrator | Saturday 03 May 2025 00:37:03 +0000 (0:00:00.135) 0:00:00.135 ********** 2025-05-03 00:37:03.250775 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-05-03 00:37:03.335805 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-05-03 00:37:03.430575 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-05-03 00:37:03.509022 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-05-03 00:37:03.590504 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-05-03 00:37:03.831369 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-05-03 00:37:03.832055 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-05-03 00:37:03.833283 | orchestrator | 2025-05-03 00:37:03.835062 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-05-03 00:37:03.835355 | orchestrator | 2025-05-03 00:37:03.837647 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-03 00:37:03.838297 | orchestrator | Saturday 03 May 2025 00:37:03 +0000 (0:00:00.743) 0:00:00.879 ********** 2025-05-03 00:37:06.312092 | orchestrator | ok: [testbed-manager] 2025-05-03 00:37:06.313105 | orchestrator | 2025-05-03 00:37:06.313228 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-05-03 00:37:06.314767 | orchestrator | 2025-05-03 00:37:06.315290 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-05-03 00:37:06.315868 | orchestrator | Saturday 03 May 2025 00:37:06 +0000 (0:00:02.475) 0:00:03.354 ********** 2025-05-03 00:37:08.086945 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:37:08.087218 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:37:08.088018 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:37:08.089189 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:37:08.090117 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:37:08.090755 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:37:08.091426 | orchestrator | 2025-05-03 00:37:08.091951 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-05-03 00:37:08.092516 | orchestrator | 2025-05-03 00:37:08.093030 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-05-03 00:37:08.093680 | orchestrator | Saturday 03 May 2025 00:37:08 +0000 (0:00:01.776) 0:00:05.130 ********** 2025-05-03 00:37:09.526396 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-03 00:37:09.526625 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-03 00:37:09.527620 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-03 00:37:09.528288 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-03 00:37:09.529744 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-03 00:37:09.530443 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-03 00:37:09.531064 | orchestrator | 2025-05-03 00:37:09.531633 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-05-03 00:37:09.532173 | orchestrator | Saturday 03 May 2025 00:37:09 +0000 (0:00:01.437) 0:00:06.568 ********** 2025-05-03 00:37:13.264548 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:37:13.264989 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:37:13.266672 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:37:13.268617 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:37:13.269649 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:37:13.270813 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:37:13.271386 | orchestrator | 2025-05-03 00:37:13.272556 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-05-03 00:37:13.272999 | orchestrator | Saturday 03 May 2025 00:37:13 +0000 (0:00:03.742) 0:00:10.311 ********** 2025-05-03 00:37:13.410393 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:37:13.486479 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:37:13.563934 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:37:13.787451 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:37:13.921832 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:37:13.922442 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:37:13.923779 | orchestrator | 2025-05-03 00:37:13.926744 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-05-03 00:37:15.527645 | orchestrator | 2025-05-03 00:37:15.528592 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-05-03 00:37:15.528625 | orchestrator | Saturday 03 May 2025 00:37:13 +0000 (0:00:00.656) 0:00:10.967 ********** 2025-05-03 00:37:15.528659 | orchestrator | changed: [testbed-manager] 2025-05-03 00:37:15.528950 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:37:15.528980 | orchestrator | changed: [testbed-node-4] 2025-05-03 
00:37:15.530993 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:37:15.532702 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:37:15.532913 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:37:15.533559 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:37:15.534497 | orchestrator | 2025-05-03 00:37:15.535297 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-05-03 00:37:15.535967 | orchestrator | Saturday 03 May 2025 00:37:15 +0000 (0:00:01.604) 0:00:12.572 ********** 2025-05-03 00:37:17.142948 | orchestrator | changed: [testbed-manager] 2025-05-03 00:37:17.143134 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:37:17.143288 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:37:17.143796 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:37:17.144289 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:37:17.144546 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:37:17.145000 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:37:17.145379 | orchestrator | 2025-05-03 00:37:17.145715 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-05-03 00:37:17.146214 | orchestrator | Saturday 03 May 2025 00:37:17 +0000 (0:00:01.610) 0:00:14.183 ********** 2025-05-03 00:37:18.688814 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:37:18.690606 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:37:18.691652 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:37:18.691690 | orchestrator | ok: [testbed-manager] 2025-05-03 00:37:18.691713 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:37:18.693884 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:37:18.694734 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:37:18.695284 | orchestrator | 2025-05-03 00:37:18.696078 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-05-03 00:37:18.696625 | orchestrator 
| Saturday 03 May 2025 00:37:18 +0000 (0:00:01.549) 0:00:15.733 ********** 2025-05-03 00:37:20.427206 | orchestrator | changed: [testbed-manager] 2025-05-03 00:37:20.428452 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:37:20.430754 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:37:20.431713 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:37:20.431825 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:37:20.431892 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:37:20.432280 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:37:20.432831 | orchestrator | 2025-05-03 00:37:20.433804 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-05-03 00:37:20.434323 | orchestrator | Saturday 03 May 2025 00:37:20 +0000 (0:00:01.740) 0:00:17.473 ********** 2025-05-03 00:37:20.577776 | orchestrator | skipping: [testbed-manager] 2025-05-03 00:37:20.651694 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:37:20.726683 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:37:20.803645 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:37:21.020543 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:37:21.155129 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:37:21.155589 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:37:21.156263 | orchestrator | 2025-05-03 00:37:21.157300 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-05-03 00:37:21.160494 | orchestrator | 2025-05-03 00:37:24.190548 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-05-03 00:37:24.190665 | orchestrator | Saturday 03 May 2025 00:37:21 +0000 (0:00:00.727) 0:00:18.200 ********** 2025-05-03 00:37:24.190697 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:37:24.190976 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:37:24.191570 | orchestrator | ok: [testbed-node-5] 
2025-05-03 00:37:24.193663 | orchestrator | ok: [testbed-manager] 2025-05-03 00:37:24.194299 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:37:24.195202 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:37:24.195292 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:37:24.196479 | orchestrator | 2025-05-03 00:37:24.197165 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-03 00:37:24.197337 | orchestrator | 2025-05-03 00:37:24 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-03 00:37:24.197767 | orchestrator | 2025-05-03 00:37:24 | INFO  | Please wait and do not abort execution. 2025-05-03 00:37:24.198933 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-03 00:37:24.199334 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-03 00:37:24.200055 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-03 00:37:24.201225 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-03 00:37:24.202325 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-03 00:37:24.203735 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-03 00:37:24.204025 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-03 00:37:24.204601 | orchestrator | 2025-05-03 00:37:24.205218 | orchestrator | Saturday 03 May 2025 00:37:24 +0000 (0:00:03.035) 0:00:21.235 ********** 2025-05-03 00:37:24.206072 | orchestrator | =============================================================================== 2025-05-03 00:37:24.206607 | orchestrator | Run update-ca-certificates 
---------------------------------------------- 3.74s 2025-05-03 00:37:24.207485 | orchestrator | Install python3-docker -------------------------------------------------- 3.04s 2025-05-03 00:37:24.208192 | orchestrator | Apply netplan configuration --------------------------------------------- 2.48s 2025-05-03 00:37:24.208908 | orchestrator | Apply netplan configuration --------------------------------------------- 1.78s 2025-05-03 00:37:24.209871 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.74s 2025-05-03 00:37:24.210302 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.61s 2025-05-03 00:37:24.211259 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.61s 2025-05-03 00:37:24.211653 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.55s 2025-05-03 00:37:24.212278 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.44s 2025-05-03 00:37:24.212725 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.74s 2025-05-03 00:37:24.213576 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.73s 2025-05-03 00:37:24.213945 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.66s 2025-05-03 00:37:24.735779 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-05-03 00:37:26.258720 | orchestrator | 2025-05-03 00:37:26 | INFO  | Task a5b48d62-5a60-49a7-8826-29904607b58d (reboot) was prepared for execution. 2025-05-03 00:37:29.234195 | orchestrator | 2025-05-03 00:37:26 | INFO  | It takes a moment until task a5b48d62-5a60-49a7-8826-29904607b58d (reboot) has been started and output is visible here. 
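Editor's note: the `osism apply reboot` invocation above passes `-e ireallymeanit=yes` to satisfy the "Exit playbook, if user did not mean to reboot systems" guard that each reboot play runs first. The guard itself is an Ansible task; the function below is only an illustrative shell sketch of that confirmation-gate pattern (function name and message are invented for illustration):

```shell
# Illustrative confirmation gate mirroring the "Exit playbook, if user did
# not mean to reboot systems" task; the real check lives in an Ansible play.
confirm_reboot() {
    local ireallymeanit="${1:-no}"
    if [ "$ireallymeanit" != "yes" ]; then
        echo "Pass -e ireallymeanit=yes to confirm the reboot." >&2
        return 1
    fi
    return 0
}
```

When the variable is anything other than `yes`, the play skips the reboot tasks, which is why every run here shows the guard task as `skipping` — the confirmation was supplied on the command line.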
2025-05-03 00:37:29.234359 | orchestrator | 2025-05-03 00:37:29.236246 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-03 00:37:29.236355 | orchestrator | 2025-05-03 00:37:29.236388 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-03 00:37:29.236929 | orchestrator | Saturday 03 May 2025 00:37:29 +0000 (0:00:00.140) 0:00:00.140 ********** 2025-05-03 00:37:29.324987 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:37:29.325251 | orchestrator | 2025-05-03 00:37:29.326009 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-03 00:37:29.327481 | orchestrator | Saturday 03 May 2025 00:37:29 +0000 (0:00:00.094) 0:00:00.235 ********** 2025-05-03 00:37:30.231704 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:37:30.232764 | orchestrator | 2025-05-03 00:37:30.233328 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-03 00:37:30.235735 | orchestrator | Saturday 03 May 2025 00:37:30 +0000 (0:00:00.907) 0:00:01.142 ********** 2025-05-03 00:37:30.346367 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:37:30.348299 | orchestrator | 2025-05-03 00:37:30.348359 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-03 00:37:30.349013 | orchestrator | 2025-05-03 00:37:30.349057 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-03 00:37:30.349610 | orchestrator | Saturday 03 May 2025 00:37:30 +0000 (0:00:00.111) 0:00:01.254 ********** 2025-05-03 00:37:30.435796 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:37:30.435953 | orchestrator | 2025-05-03 00:37:30.436825 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-03 00:37:30.438419 | orchestrator | Saturday 03 May 2025 
00:37:30 +0000 (0:00:00.092) 0:00:01.346 ********** 2025-05-03 00:37:31.065366 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:37:31.065567 | orchestrator | 2025-05-03 00:37:31.066464 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-03 00:37:31.067911 | orchestrator | Saturday 03 May 2025 00:37:31 +0000 (0:00:00.628) 0:00:01.975 ********** 2025-05-03 00:37:31.168194 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:37:31.168813 | orchestrator | 2025-05-03 00:37:31.169566 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-03 00:37:31.171678 | orchestrator | 2025-05-03 00:37:31.172985 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-03 00:37:31.173288 | orchestrator | Saturday 03 May 2025 00:37:31 +0000 (0:00:00.101) 0:00:02.077 ********** 2025-05-03 00:37:31.253962 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:37:31.254390 | orchestrator | 2025-05-03 00:37:31.254992 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-03 00:37:31.257464 | orchestrator | Saturday 03 May 2025 00:37:31 +0000 (0:00:00.087) 0:00:02.164 ********** 2025-05-03 00:37:32.007956 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:37:32.008538 | orchestrator | 2025-05-03 00:37:32.008604 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-03 00:37:32.010068 | orchestrator | Saturday 03 May 2025 00:37:32 +0000 (0:00:00.752) 0:00:02.917 ********** 2025-05-03 00:37:32.124435 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:37:32.124670 | orchestrator | 2025-05-03 00:37:32.125366 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-03 00:37:32.125480 | orchestrator | 2025-05-03 00:37:32.125812 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2025-05-03 00:37:32.126176 | orchestrator | Saturday 03 May 2025 00:37:32 +0000 (0:00:00.110) 0:00:03.027 ********** 2025-05-03 00:37:32.229272 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:37:32.229940 | orchestrator | 2025-05-03 00:37:32.229976 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-03 00:37:32.231589 | orchestrator | Saturday 03 May 2025 00:37:32 +0000 (0:00:00.112) 0:00:03.139 ********** 2025-05-03 00:37:32.893785 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:37:32.895606 | orchestrator | 2025-05-03 00:37:32.896520 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-03 00:37:32.896564 | orchestrator | Saturday 03 May 2025 00:37:32 +0000 (0:00:00.663) 0:00:03.803 ********** 2025-05-03 00:37:33.002529 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:37:33.003129 | orchestrator | 2025-05-03 00:37:33.003996 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-03 00:37:33.005677 | orchestrator | 2025-05-03 00:37:33.095565 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-03 00:37:33.095666 | orchestrator | Saturday 03 May 2025 00:37:32 +0000 (0:00:00.108) 0:00:03.911 ********** 2025-05-03 00:37:33.095698 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:37:33.095760 | orchestrator | 2025-05-03 00:37:33.096499 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-03 00:37:33.096978 | orchestrator | Saturday 03 May 2025 00:37:33 +0000 (0:00:00.095) 0:00:04.006 ********** 2025-05-03 00:37:33.752927 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:37:33.755163 | orchestrator | 2025-05-03 00:37:33.856973 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2025-05-03 00:37:33.857087 | orchestrator | Saturday 03 May 2025 00:37:33 +0000 (0:00:00.656) 0:00:04.662 ********** 2025-05-03 00:37:33.857119 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:37:33.858008 | orchestrator | 2025-05-03 00:37:33.858915 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-03 00:37:33.859604 | orchestrator | 2025-05-03 00:37:33.865174 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-03 00:37:33.947044 | orchestrator | Saturday 03 May 2025 00:37:33 +0000 (0:00:00.105) 0:00:04.768 ********** 2025-05-03 00:37:33.947171 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:37:33.947682 | orchestrator | 2025-05-03 00:37:33.948631 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-03 00:37:33.952369 | orchestrator | Saturday 03 May 2025 00:37:33 +0000 (0:00:00.090) 0:00:04.858 ********** 2025-05-03 00:37:34.600619 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:37:34.600964 | orchestrator | 2025-05-03 00:37:34.602172 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-03 00:37:34.603127 | orchestrator | Saturday 03 May 2025 00:37:34 +0000 (0:00:00.650) 0:00:05.509 ********** 2025-05-03 00:37:34.627784 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:37:34.628917 | orchestrator | 2025-05-03 00:37:34.628950 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-03 00:37:34.629643 | orchestrator | 2025-05-03 00:37:34 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-03 00:37:34.630758 | orchestrator | 2025-05-03 00:37:34 | INFO  | Please wait and do not abort execution. 
2025-05-03 00:37:34.630890 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-03 00:37:34.631732 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-03 00:37:34.632778 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-03 00:37:34.633216 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-03 00:37:34.634216 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-03 00:37:34.635085 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-03 00:37:34.635367 | orchestrator | 2025-05-03 00:37:34.635925 | orchestrator | Saturday 03 May 2025 00:37:34 +0000 (0:00:00.029) 0:00:05.539 ********** 2025-05-03 00:37:34.636513 | orchestrator | =============================================================================== 2025-05-03 00:37:34.636919 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.26s 2025-05-03 00:37:34.637402 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.57s 2025-05-03 00:37:34.637769 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.57s 2025-05-03 00:37:35.095132 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-05-03 00:37:36.469970 | orchestrator | 2025-05-03 00:37:36 | INFO  | Task 76c33f32-1389-49ba-823b-cc7e77a458c4 (wait-for-connection) was prepared for execution. 2025-05-03 00:37:39.520759 | orchestrator | 2025-05-03 00:37:36 | INFO  | It takes a moment until task 76c33f32-1389-49ba-823b-cc7e77a458c4 (wait-for-connection) has been started and output is visible here. 
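Editor's note: the reboot play deliberately does not wait ("Reboot system - do not wait for the reboot to complete"); a separate `osism apply wait-for-connection` run, shown next, blocks until every node answers again. That play uses Ansible's `wait_for_connection` module; the helper below is only a rough shell analogue of the same polling idea (helper name, interval, and timeout are illustrative):

```shell
# Rough shell analogue of the wait-for-connection step: poll until SSH on a
# host answers again or a timeout elapses. The real play uses Ansible's
# wait_for_connection module; names and timings here are assumptions.
wait_for_ssh() {
    local host="$1" timeout="${2:-600}" waited=0
    until ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; do
        sleep 10
        waited=$(( waited + 10 ))
        if [ "$waited" -ge "$timeout" ]; then
            return 1
        fi
    done
}
```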
2025-05-03 00:37:39.520964 | orchestrator | 2025-05-03 00:37:39.524539 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-05-03 00:37:51.873599 | orchestrator | 2025-05-03 00:37:51.873783 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-05-03 00:37:51.873807 | orchestrator | Saturday 03 May 2025 00:37:39 +0000 (0:00:00.167) 0:00:00.167 ********** 2025-05-03 00:37:51.873903 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:37:51.874582 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:37:51.874622 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:37:51.874647 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:37:51.875473 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:37:51.875961 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:37:51.876682 | orchestrator | 2025-05-03 00:37:51.877279 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-03 00:37:51.877895 | orchestrator | 2025-05-03 00:37:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-03 00:37:51.877990 | orchestrator | 2025-05-03 00:37:51 | INFO  | Please wait and do not abort execution. 
2025-05-03 00:37:51.879318 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-03 00:37:51.880162 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-03 00:37:51.880637 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-03 00:37:51.881158 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-03 00:37:51.881639 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-03 00:37:51.882254 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-03 00:37:51.883097 | orchestrator | 2025-05-03 00:37:51.883434 | orchestrator | Saturday 03 May 2025 00:37:51 +0000 (0:00:12.349) 0:00:12.517 ********** 2025-05-03 00:37:51.883968 | orchestrator | =============================================================================== 2025-05-03 00:37:51.884283 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.35s 2025-05-03 00:37:52.317093 | orchestrator | + osism apply hddtemp 2025-05-03 00:37:53.717703 | orchestrator | 2025-05-03 00:37:53 | INFO  | Task 4e86108a-ac8e-4068-bf7f-5c59d4ae5e3e (hddtemp) was prepared for execution. 2025-05-03 00:37:56.794606 | orchestrator | 2025-05-03 00:37:53 | INFO  | It takes a moment until task 4e86108a-ac8e-4068-bf7f-5c59d4ae5e3e (hddtemp) has been started and output is visible here. 
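Editor's note: the hddtemp run that follows replaces the retired `hddtemp` package with the kernel's `drivetemp` hwmon driver — it persists the module so it loads at boot, then loads it immediately (on the manager; on the nodes the load is skipped because the module is already present). In shell terms the two tasks amount to the sketch below; the conf-dir parameter exists only so the sketch is testable, the real path is `/etc/modules-load.d`, and the role performs these steps via Ansible modules rather than shell:

```shell
# What "Enable Kernel Module drivetemp" / "Load Kernel Module drivetemp"
# amount to: persist the module name for boot, then load it now.
enable_and_load_module() {
    local module="$1" confdir="${2:-/etc/modules-load.d}"
    printf '%s\n' "$module" > "$confdir/$module.conf"   # load at every boot
    modprobe "$module"                                  # load immediately
}
```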
2025-05-03 00:37:56.794815 | orchestrator | 2025-05-03 00:37:56.794958 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-05-03 00:37:56.796192 | orchestrator | 2025-05-03 00:37:56.797823 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-05-03 00:37:56.799166 | orchestrator | Saturday 03 May 2025 00:37:56 +0000 (0:00:00.193) 0:00:00.193 ********** 2025-05-03 00:37:56.942215 | orchestrator | ok: [testbed-manager] 2025-05-03 00:37:57.017216 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:37:57.091665 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:37:57.166310 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:37:57.240940 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:37:57.470445 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:37:57.470810 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:37:57.471354 | orchestrator | 2025-05-03 00:37:57.472358 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-05-03 00:37:57.475122 | orchestrator | Saturday 03 May 2025 00:37:57 +0000 (0:00:00.675) 0:00:00.869 ********** 2025-05-03 00:37:58.588181 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 00:37:58.588363 | orchestrator | 2025-05-03 00:37:58.591663 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-05-03 00:38:00.611489 | orchestrator | Saturday 03 May 2025 00:37:58 +0000 (0:00:01.115) 0:00:01.984 ********** 2025-05-03 00:38:00.611648 | orchestrator | ok: [testbed-manager] 2025-05-03 00:38:00.611944 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:38:00.611977 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:38:00.612032 | 
orchestrator | ok: [testbed-node-2] 2025-05-03 00:38:00.612937 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:38:00.614273 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:38:00.614727 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:38:00.615679 | orchestrator | 2025-05-03 00:38:00.616690 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-05-03 00:38:00.616834 | orchestrator | Saturday 03 May 2025 00:38:00 +0000 (0:00:02.026) 0:00:04.011 ********** 2025-05-03 00:38:01.222755 | orchestrator | changed: [testbed-manager] 2025-05-03 00:38:01.306201 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:38:01.726640 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:38:01.727382 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:38:01.728271 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:38:01.729943 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:38:01.730756 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:38:01.731552 | orchestrator | 2025-05-03 00:38:01.732344 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-05-03 00:38:01.733167 | orchestrator | Saturday 03 May 2025 00:38:01 +0000 (0:00:01.111) 0:00:05.122 ********** 2025-05-03 00:38:03.712789 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:38:03.713532 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:38:03.715651 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:38:03.717582 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:38:03.718295 | orchestrator | ok: [testbed-manager] 2025-05-03 00:38:03.719223 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:38:03.720664 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:38:03.721367 | orchestrator | 2025-05-03 00:38:03.722642 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-05-03 00:38:03.964598 | orchestrator | Saturday 03 May 2025 00:38:03 +0000 
(0:00:01.986) 0:00:07.109 ********** 2025-05-03 00:38:03.964704 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:38:04.050626 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:38:04.144573 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:38:04.245467 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:38:04.370573 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:38:04.374660 | orchestrator | changed: [testbed-manager] 2025-05-03 00:38:04.374732 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:38:04.375532 | orchestrator | 2025-05-03 00:38:04.376372 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-05-03 00:38:04.378386 | orchestrator | Saturday 03 May 2025 00:38:04 +0000 (0:00:00.655) 0:00:07.764 ********** 2025-05-03 00:38:16.642349 | orchestrator | changed: [testbed-manager] 2025-05-03 00:38:16.642548 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:38:16.642583 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:38:16.643128 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:38:16.643971 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:38:16.646171 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:38:16.646771 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:38:16.646803 | orchestrator | 2025-05-03 00:38:16.646820 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-05-03 00:38:16.646842 | orchestrator | Saturday 03 May 2025 00:38:16 +0000 (0:00:12.270) 0:00:20.034 ********** 2025-05-03 00:38:17.809775 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 00:38:17.810445 | orchestrator | 2025-05-03 00:38:17.812112 | orchestrator | TASK [osism.services.hddtemp : 
Manage lm-sensors service] ********************** 2025-05-03 00:38:19.594888 | orchestrator | Saturday 03 May 2025 00:38:17 +0000 (0:00:01.172) 0:00:21.206 ********** 2025-05-03 00:38:19.595032 | orchestrator | changed: [testbed-manager] 2025-05-03 00:38:19.596642 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:38:19.597000 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:38:19.598356 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:38:19.599585 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:38:19.601298 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:38:19.602438 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:38:19.603487 | orchestrator | 2025-05-03 00:38:19.604683 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-03 00:38:19.605263 | orchestrator | 2025-05-03 00:38:19 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-03 00:38:19.613769 | orchestrator | 2025-05-03 00:38:19 | INFO  | Please wait and do not abort execution. 
2025-05-03 00:38:19.613885 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:38:19.614395 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-03 00:38:19.614692 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-03 00:38:19.615365 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-03 00:38:19.616093 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-03 00:38:19.616593 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-03 00:38:19.616980 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-03 00:38:19.617009 | orchestrator |
2025-05-03 00:38:19.617390 | orchestrator | Saturday 03 May 2025 00:38:19 +0000 (0:00:01.787) 0:00:22.994 **********
2025-05-03 00:38:19.617521 | orchestrator | ===============================================================================
2025-05-03 00:38:19.617943 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.27s
2025-05-03 00:38:19.618284 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.03s
2025-05-03 00:38:19.618572 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.99s
2025-05-03 00:38:19.618969 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.79s
2025-05-03 00:38:19.619404 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.17s
2025-05-03 00:38:19.619620 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.12s
2025-05-03 00:38:19.620224 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.11s
2025-05-03 00:38:19.620584 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.68s
2025-05-03 00:38:19.620813 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.66s
2025-05-03 00:38:20.141279 | orchestrator | + sudo systemctl restart docker-compose@manager
2025-05-03 00:38:21.534400 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-05-03 00:38:21.535265 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-05-03 00:38:21.535304 | orchestrator | + local max_attempts=60
2025-05-03 00:38:21.535321 | orchestrator | + local name=ceph-ansible
2025-05-03 00:38:21.535337 | orchestrator | + local attempt_num=1
2025-05-03 00:38:21.535361 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-05-03 00:38:21.568069 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-03 00:38:21.568790 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-05-03 00:38:21.568822 | orchestrator | + local max_attempts=60
2025-05-03 00:38:21.568841 | orchestrator | + local name=kolla-ansible
2025-05-03 00:38:21.568886 | orchestrator | + local attempt_num=1
2025-05-03 00:38:21.568908 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-05-03 00:38:21.597334 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-03 00:38:21.597752 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-05-03 00:38:21.597779 | orchestrator | + local max_attempts=60
2025-05-03 00:38:21.597795 | orchestrator | + local name=osism-ansible
2025-05-03 00:38:21.597810 | orchestrator | + local attempt_num=1
2025-05-03 00:38:21.597832 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-05-03 00:38:21.623332 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-03 00:38:21.782970 | orchestrator | + [[ true == \t\r\u\e ]]
2025-05-03 00:38:21.783098 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-05-03 00:38:21.783136 | orchestrator | ARA in ceph-ansible already disabled.
2025-05-03 00:38:21.923430 | orchestrator | ARA in kolla-ansible already disabled.
2025-05-03 00:38:22.086901 | orchestrator | ARA in osism-ansible already disabled.
2025-05-03 00:38:22.248965 | orchestrator | ARA in osism-kubernetes already disabled.
2025-05-03 00:38:22.250186 | orchestrator | + osism apply gather-facts
2025-05-03 00:38:23.629939 | orchestrator | 2025-05-03 00:38:23 | INFO  | Task c4b55306-45a0-42bf-b70c-efc1614f342f (gather-facts) was prepared for execution.
2025-05-03 00:38:26.670263 | orchestrator | 2025-05-03 00:38:23 | INFO  | It takes a moment until task c4b55306-45a0-42bf-b70c-efc1614f342f (gather-facts) has been started and output is visible here.
2025-05-03 00:38:26.670440 | orchestrator |
2025-05-03 00:38:26.671975 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-03 00:38:26.673952 | orchestrator |
2025-05-03 00:38:26.674337 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-03 00:38:26.675946 | orchestrator | Saturday 03 May 2025 00:38:26 +0000 (0:00:00.159) 0:00:00.159 **********
2025-05-03 00:38:31.432545 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:38:31.432763 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:38:31.433501 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:38:31.434521 | orchestrator | ok: [testbed-manager]
2025-05-03 00:38:31.435067 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:38:31.438956 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:38:31.441732 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:38:31.445098 | orchestrator |
2025-05-03 00:38:31.445137 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-05-03 00:38:31.445154 | orchestrator |
2025-05-03 00:38:31.445175 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-05-03 00:38:31.446517 | orchestrator | Saturday 03 May 2025 00:38:31 +0000 (0:00:04.765) 0:00:04.924 **********
2025-05-03 00:38:31.584055 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:38:31.657402 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:38:31.737062 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:38:31.814944 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:38:31.893758 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:38:31.938975 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:38:31.939384 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:38:31.940216 | orchestrator |
2025-05-03 00:38:31.940954 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 00:38:31.941455 | orchestrator | 2025-05-03 00:38:31 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-03 00:38:31.941674 | orchestrator | 2025-05-03 00:38:31 | INFO  | Please wait and do not abort execution.
2025-05-03 00:38:31.942490 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-03 00:38:31.943064 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-03 00:38:31.943791 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-03 00:38:31.944311 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-03 00:38:31.944920 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-03 00:38:31.945296 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-03 00:38:31.946065 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-03 00:38:31.946336 | orchestrator |
2025-05-03 00:38:31.946777 | orchestrator | Saturday 03 May 2025 00:38:31 +0000 (0:00:00.506) 0:00:05.431 **********
2025-05-03 00:38:31.947234 | orchestrator | ===============================================================================
2025-05-03 00:38:31.947651 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.77s
2025-05-03 00:38:31.948140 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s
2025-05-03 00:38:32.497675 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-05-03 00:38:32.516333 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-05-03 00:38:32.535306 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-05-03 00:38:32.554447 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-05-03 00:38:32.566913 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-05-03 00:38:32.578316 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-05-03 00:38:32.593947 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-05-03 00:38:32.613466 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-05-03 00:38:32.631402 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-05-03 00:38:32.642242 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-05-03 00:38:32.658163 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-05-03 00:38:32.668941 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-05-03 00:38:32.691267 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-05-03 00:38:32.709437 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-05-03 00:38:32.728313 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-05-03 00:38:32.745602 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-05-03 00:38:32.763194 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-05-03 00:38:32.781103 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-05-03 00:38:32.797101 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-05-03 00:38:32.813422 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-05-03 00:38:32.827602 | orchestrator | + [[ false == \t\r\u\e ]]
2025-05-03 00:38:32.948348 | orchestrator | changed
2025-05-03 00:38:33.025577 |
2025-05-03 00:38:33.025786 | TASK [Deploy services]
2025-05-03 00:38:33.177631 | orchestrator | skipping: Conditional result was False
2025-05-03 00:38:33.189460 |
2025-05-03 00:38:33.189611 | TASK [Deploy in a nutshell]
2025-05-03 00:38:33.944487 | orchestrator | + set -e
2025-05-03 00:38:33.944726 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-05-03 00:38:33.944760 | orchestrator | ++ export INTERACTIVE=false
2025-05-03 00:38:33.944778 | orchestrator | ++ INTERACTIVE=false
2025-05-03 00:38:33.944822 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-05-03 00:38:33.944841 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-05-03 00:38:33.944899 | orchestrator | + source /opt/manager-vars.sh
2025-05-03 00:38:33.944923 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-05-03 00:38:33.944948 | orchestrator | ++ NUMBER_OF_NODES=6
2025-05-03 00:38:33.944965 | orchestrator | ++ export CEPH_VERSION=reef
2025-05-03 00:38:33.944979 | orchestrator | ++ CEPH_VERSION=reef
2025-05-03 00:38:33.944993 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-05-03 00:38:33.945008 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-05-03 00:38:33.945022 | orchestrator | ++ export MANAGER_VERSION=8.1.0
2025-05-03 00:38:33.945036 | orchestrator | ++ MANAGER_VERSION=8.1.0
2025-05-03 00:38:33.945051 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-05-03 00:38:33.945065 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-05-03 00:38:33.945079 | orchestrator | ++ export ARA=false
2025-05-03 00:38:33.945093 | orchestrator | ++ ARA=false
2025-05-03 00:38:33.945107 | orchestrator | ++ export TEMPEST=false
2025-05-03 00:38:33.945121 | orchestrator | ++ TEMPEST=false
2025-05-03 00:38:33.945134 | orchestrator | ++ export IS_ZUUL=true
2025-05-03 00:38:33.945148 | orchestrator | ++ IS_ZUUL=true
2025-05-03 00:38:33.945162 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.136
2025-05-03 00:38:33.945177 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.136
2025-05-03 00:38:33.945191 | orchestrator | ++ export EXTERNAL_API=false
2025-05-03 00:38:33.945206 | orchestrator | ++ EXTERNAL_API=false
2025-05-03 00:38:33.945219 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-05-03 00:38:33.945237 | orchestrator | ++ IMAGE_USER=ubuntu
2025-05-03 00:38:33.945251 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-05-03 00:38:33.945266 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-05-03 00:38:33.945289 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-05-03 00:38:33.946290 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-05-03 00:38:33.946318 | orchestrator |
2025-05-03 00:38:33.946334 | orchestrator | # PULL IMAGES
2025-05-03 00:38:33.946348 | orchestrator |
2025-05-03 00:38:33.946363 | orchestrator | + echo
2025-05-03 00:38:33.946377 | orchestrator | + echo '# PULL IMAGES'
2025-05-03 00:38:33.946391 | orchestrator | + echo
2025-05-03 00:38:33.946411 | orchestrator | ++ semver 8.1.0 7.0.0
2025-05-03 00:38:34.008264 | orchestrator | + [[ 1 -ge 0 ]]
2025-05-03 00:38:35.381001 | orchestrator | + osism apply -r 2 -e custom pull-images
2025-05-03 00:38:35.382745 | orchestrator | 2025-05-03 00:38:35 | INFO  | Trying to run play pull-images in environment custom
2025-05-03 00:38:35.425790 | orchestrator | 2025-05-03 00:38:35 | INFO  | Task d13b0909-98c7-49ba-af24-156ab384e169 (pull-images) was prepared for execution.
2025-05-03 00:38:38.462923 | orchestrator | 2025-05-03 00:38:35 | INFO  | It takes a moment until task d13b0909-98c7-49ba-af24-156ab384e169 (pull-images) has been started and output is visible here.
2025-05-03 00:38:38.463302 | orchestrator |
2025-05-03 00:38:38.463950 | orchestrator | PLAY [Pull images] *************************************************************
2025-05-03 00:38:38.464018 | orchestrator |
2025-05-03 00:38:38.464608 | orchestrator | TASK [Pull keystone image] *****************************************************
2025-05-03 00:38:38.464634 | orchestrator | Saturday 03 May 2025 00:38:38 +0000 (0:00:00.137) 0:00:00.137 **********
2025-05-03 00:39:11.755529 | orchestrator | changed: [testbed-manager]
2025-05-03 00:39:58.831882 | orchestrator |
2025-05-03 00:39:58.832048 | orchestrator | TASK [Pull other images] *******************************************************
2025-05-03 00:39:58.832095 | orchestrator | Saturday 03 May 2025 00:39:11 +0000 (0:00:33.290) 0:00:33.428 **********
2025-05-03 00:39:58.832129 | orchestrator | changed: [testbed-manager] => (item=aodh)
2025-05-03 00:39:58.833199 | orchestrator | changed: [testbed-manager] => (item=barbican)
2025-05-03 00:39:58.833226 | orchestrator | changed: [testbed-manager] => (item=ceilometer)
2025-05-03 00:39:58.833241 | orchestrator | changed: [testbed-manager] => (item=cinder)
2025-05-03 00:39:58.833272 | orchestrator | changed: [testbed-manager] => (item=common)
2025-05-03 00:39:58.833287 | orchestrator | changed: [testbed-manager] => (item=designate)
2025-05-03 00:39:58.833302 | orchestrator | changed: [testbed-manager] => (item=glance)
2025-05-03 00:39:58.833348 | orchestrator | changed: [testbed-manager] => (item=grafana)
2025-05-03 00:39:58.833927 | orchestrator | changed: [testbed-manager] => (item=horizon)
2025-05-03 00:39:58.833952 | orchestrator | changed: [testbed-manager] => (item=ironic)
2025-05-03 00:39:58.833980 | orchestrator | changed: [testbed-manager] => (item=loadbalancer)
2025-05-03 00:39:58.834352 | orchestrator | changed: [testbed-manager] => (item=magnum)
2025-05-03 00:39:58.835053 | orchestrator | changed: [testbed-manager] => (item=mariadb)
2025-05-03 00:39:58.839102 | orchestrator | changed: [testbed-manager] => (item=memcached)
2025-05-03 00:39:58.839356 | orchestrator | changed: [testbed-manager] => (item=neutron)
2025-05-03 00:39:58.839468 | orchestrator | changed: [testbed-manager] => (item=nova)
2025-05-03 00:39:58.839489 | orchestrator | changed: [testbed-manager] => (item=octavia)
2025-05-03 00:39:58.839512 | orchestrator | changed: [testbed-manager] => (item=opensearch)
2025-05-03 00:39:58.839536 | orchestrator | changed: [testbed-manager] => (item=openvswitch)
2025-05-03 00:39:58.839559 | orchestrator | changed: [testbed-manager] => (item=ovn)
2025-05-03 00:39:58.839583 | orchestrator | changed: [testbed-manager] => (item=placement)
2025-05-03 00:39:58.839607 | orchestrator | changed: [testbed-manager] => (item=rabbitmq)
2025-05-03 00:39:58.839632 | orchestrator | changed: [testbed-manager] => (item=redis)
2025-05-03 00:39:58.839655 | orchestrator | changed: [testbed-manager] => (item=skyline)
2025-05-03 00:39:58.839679 | orchestrator |
2025-05-03 00:39:58.839823 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 00:39:58.839888 | orchestrator | 2025-05-03 00:39:58 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-03 00:39:58.839928 | orchestrator | 2025-05-03 00:39:58 | INFO  | Please wait and do not abort execution.
2025-05-03 00:39:58.840045 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:39:58.840594 | orchestrator |
2025-05-03 00:39:58.840964 | orchestrator | Saturday 03 May 2025 00:39:58 +0000 (0:00:47.080) 0:01:20.508 **********
2025-05-03 00:39:58.841488 | orchestrator | ===============================================================================
2025-05-03 00:39:58.842080 | orchestrator | Pull other images ------------------------------------------------------ 47.08s
2025-05-03 00:39:58.844511 | orchestrator | Pull keystone image ---------------------------------------------------- 33.29s
2025-05-03 00:40:00.867341 | orchestrator | 2025-05-03 00:40:00 | INFO  | Trying to run play wipe-partitions in environment custom
2025-05-03 00:40:00.916998 | orchestrator | 2025-05-03 00:40:00 | INFO  | Task 7df96367-62a0-40e8-88e9-0ac26775bb77 (wipe-partitions) was prepared for execution.
2025-05-03 00:40:04.022720 | orchestrator | 2025-05-03 00:40:00 | INFO  | It takes a moment until task 7df96367-62a0-40e8-88e9-0ac26775bb77 (wipe-partitions) has been started and output is visible here.
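The wipe-partitions play that follows clears the Ceph data disks (/dev/sdb, /dev/sdc, /dev/sdd on testbed-node-3/4/5). A minimal shell sketch of the per-device sequence, reconstructed from the task names in the play; the concrete commands (`wipefs` flags, the exact `dd` invocation) are assumptions, since the play only logs task names:

```shell
# Hypothetical helpers mirroring the play's steps. Task names
# ("Wipe partitions with wipefs", "Overwrite first 32M with zeros",
# "Reload udev rules", "Request device events from the kernel")
# come from the log; the commands themselves are a sketch.
wipe_device() {
    local dev="$1"
    wipefs --all "$dev"                        # drop filesystem/LVM/Ceph signatures
    dd if=/dev/zero of="$dev" bs=1M count=32   # zero the first 32M (partition table, OSD labels)
}

refresh_udev() {
    udevadm control --reload-rules             # "Reload udev rules"
    udevadm trigger                            # "Request device events from the kernel"
}
```

Re-triggering udev at the end makes the kernel re-read the now-empty devices so stale device links do not survive into the Ceph deployment.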
2025-05-03 00:40:04.022929 | orchestrator |
2025-05-03 00:40:04.024728 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-05-03 00:40:04.024975 | orchestrator |
2025-05-03 00:40:04.025268 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-05-03 00:40:04.025585 | orchestrator | Saturday 03 May 2025 00:40:04 +0000 (0:00:00.122) 0:00:00.122 **********
2025-05-03 00:40:04.689153 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:40:04.691553 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:40:04.692572 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:40:04.693984 | orchestrator |
2025-05-03 00:40:04.859017 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-05-03 00:40:04.859144 | orchestrator | Saturday 03 May 2025 00:40:04 +0000 (0:00:00.668) 0:00:00.791 **********
2025-05-03 00:40:04.859180 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:04.960986 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:40:04.961446 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:40:04.962082 | orchestrator |
2025-05-03 00:40:04.962688 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-05-03 00:40:04.963449 | orchestrator | Saturday 03 May 2025 00:40:04 +0000 (0:00:00.272) 0:00:01.064 **********
2025-05-03 00:40:05.718674 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:40:05.719290 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:40:05.719344 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:40:05.719561 | orchestrator |
2025-05-03 00:40:05.720777 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-05-03 00:40:05.721504 | orchestrator | Saturday 03 May 2025 00:40:05 +0000 (0:00:00.754) 0:00:01.818 **********
2025-05-03 00:40:05.897803 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:05.996751 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:40:06.002573 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:40:07.242256 | orchestrator |
2025-05-03 00:40:07.242371 | orchestrator | TASK [Check device availability] ***********************************************
2025-05-03 00:40:07.242390 | orchestrator | Saturday 03 May 2025 00:40:05 +0000 (0:00:00.276) 0:00:02.095 **********
2025-05-03 00:40:07.242441 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-05-03 00:40:07.243575 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-05-03 00:40:07.244079 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-05-03 00:40:07.244961 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-05-03 00:40:07.246685 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-05-03 00:40:07.247589 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-05-03 00:40:07.250692 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-05-03 00:40:07.250985 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-05-03 00:40:07.251472 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-05-03 00:40:07.252092 | orchestrator |
2025-05-03 00:40:07.252582 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-05-03 00:40:07.252624 | orchestrator | Saturday 03 May 2025 00:40:07 +0000 (0:00:01.248) 0:00:03.343 **********
2025-05-03 00:40:08.573483 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-05-03 00:40:08.573722 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-05-03 00:40:08.573750 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-05-03 00:40:08.573765 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-05-03 00:40:08.573786 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-05-03 00:40:08.574740 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-05-03 00:40:08.574779 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-05-03 00:40:08.575107 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-05-03 00:40:08.575135 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-05-03 00:40:08.575150 | orchestrator |
2025-05-03 00:40:08.575172 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-05-03 00:40:11.588496 | orchestrator | Saturday 03 May 2025 00:40:08 +0000 (0:00:01.331) 0:00:04.675 **********
2025-05-03 00:40:11.588644 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-05-03 00:40:11.589710 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-05-03 00:40:11.590238 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-05-03 00:40:11.590632 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-05-03 00:40:11.593577 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-05-03 00:40:11.593935 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-05-03 00:40:11.594313 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-05-03 00:40:11.594556 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-05-03 00:40:11.594817 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-05-03 00:40:11.595083 | orchestrator |
2025-05-03 00:40:11.595361 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-05-03 00:40:11.595830 | orchestrator | Saturday 03 May 2025 00:40:11 +0000 (0:00:03.011) 0:00:07.687 **********
2025-05-03 00:40:12.230373 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:40:12.233110 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:40:12.233180 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:40:12.233201 | orchestrator |
2025-05-03 00:40:12.233459 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-05-03 00:40:12.233515 | orchestrator | Saturday 03 May 2025 00:40:12 +0000 (0:00:00.645) 0:00:08.332 **********
2025-05-03 00:40:12.865448 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:40:12.865805 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:40:12.865891 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:40:12.866217 | orchestrator |
2025-05-03 00:40:12.866814 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 00:40:12.867144 | orchestrator | 2025-05-03 00:40:12 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-03 00:40:12.867345 | orchestrator | 2025-05-03 00:40:12 | INFO  | Please wait and do not abort execution.
2025-05-03 00:40:12.868126 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-03 00:40:12.868524 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-03 00:40:12.868919 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-03 00:40:12.869571 | orchestrator |
2025-05-03 00:40:12.870616 | orchestrator | Saturday 03 May 2025 00:40:12 +0000 (0:00:00.629) 0:00:08.962 **********
2025-05-03 00:40:12.870726 | orchestrator | ===============================================================================
2025-05-03 00:40:12.871341 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 3.01s
2025-05-03 00:40:12.871637 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.33s
2025-05-03 00:40:12.872166 | orchestrator | Check device availability ----------------------------------------------- 1.25s
2025-05-03 00:40:12.872584 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.75s
2025-05-03 00:40:12.873108 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.67s
2025-05-03 00:40:12.873527 | orchestrator | Reload udev rules ------------------------------------------------------- 0.65s
2025-05-03 00:40:12.873902 | orchestrator | Request device events from the kernel ----------------------------------- 0.63s
2025-05-03 00:40:12.874402 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.28s
2025-05-03 00:40:12.874736 | orchestrator | Remove all rook related logical devices --------------------------------- 0.27s
2025-05-03 00:40:14.947737 | orchestrator | 2025-05-03 00:40:14 | INFO  | Task dd735880-51fb-44ae-8d1a-15cad5158c9c (facts) was prepared for execution.
2025-05-03 00:40:17.815542 | orchestrator | 2025-05-03 00:40:14 | INFO  | It takes a moment until task dd735880-51fb-44ae-8d1a-15cad5158c9c (facts) has been started and output is visible here.
2025-05-03 00:40:17.815673 | orchestrator |
2025-05-03 00:40:17.815946 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-05-03 00:40:17.816420 | orchestrator |
2025-05-03 00:40:17.817108 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-03 00:40:17.817512 | orchestrator | Saturday 03 May 2025 00:40:17 +0000 (0:00:00.177) 0:00:00.177 **********
2025-05-03 00:40:18.735425 | orchestrator | ok: [testbed-manager]
2025-05-03 00:40:18.735968 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:40:18.736012 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:40:18.736338 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:40:18.739318 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:40:18.739554 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:40:18.740496 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:40:18.742118 | orchestrator |
2025-05-03 00:40:18.743101 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-03 00:40:18.744104 | orchestrator | Saturday 03 May 2025 00:40:18 +0000 (0:00:00.919) 0:00:01.096 **********
2025-05-03 00:40:18.854447 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:40:18.915039 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:40:18.973832 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:40:19.033586 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:40:19.093241 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:19.638674 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:40:19.640041 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:40:19.643308 | orchestrator |
2025-05-03 00:40:19.643881 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-03 00:40:19.644502 | orchestrator |
2025-05-03 00:40:19.645352 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-03 00:40:19.649004 | orchestrator | Saturday 03 May 2025 00:40:19 +0000 (0:00:00.905) 0:00:02.002 **********
2025-05-03 00:40:24.072114 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:40:24.074622 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:40:24.075027 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:40:24.077238 | orchestrator | ok: [testbed-manager]
2025-05-03 00:40:24.078537 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:40:24.078632 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:40:24.079337 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:40:24.080144 | orchestrator |
2025-05-03 00:40:24.080787 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-05-03 00:40:24.082246 | orchestrator |
2025-05-03 00:40:24.083889 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-05-03 00:40:24.087760 | orchestrator | Saturday 03 May 2025 00:40:24 +0000 (0:00:04.431) 0:00:06.433 **********
2025-05-03 00:40:24.483481 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:40:24.559357 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:40:24.640076 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:40:24.720346 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:40:24.803450 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:24.840758 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:40:24.841436 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:40:24.841493 | orchestrator |
2025-05-03 00:40:24.842914 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 00:40:24.843416 | orchestrator | 2025-05-03 00:40:24 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-03 00:40:24.843443 | orchestrator | 2025-05-03 00:40:24 | INFO  | Please wait and do not abort execution.
2025-05-03 00:40:24.843464 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-03 00:40:24.844034 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-03 00:40:24.844586 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-03 00:40:24.845041 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-03 00:40:24.845689 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-03 00:40:24.847062 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-03 00:40:24.847966 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-03 00:40:24.848032 | orchestrator |
2025-05-03 00:40:24.848056 | orchestrator | Saturday 03 May 2025 00:40:24 +0000 (0:00:00.769) 0:00:07.203 **********
2025-05-03 00:40:24.848435 | orchestrator | ===============================================================================
2025-05-03 00:40:24.849081 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.43s
2025-05-03 00:40:24.850345 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.92s
2025-05-03 00:40:24.850690 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 0.91s
2025-05-03 00:40:24.851629 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.77s
2025-05-03 00:40:27.642980 | orchestrator | 2025-05-03 00:40:27 | INFO  | Task 4d48119e-3d78-4227-806b-79e2f2ae5106 (ceph-configure-lvm-volumes) was prepared for execution.
2025-05-03 00:40:31.364746 | orchestrator | 2025-05-03 00:40:27 | INFO  | It takes a moment until task 4d48119e-3d78-4227-806b-79e2f2ae5106 (ceph-configure-lvm-volumes) has been started and output is visible here.
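Earlier in this section the deploy script waits for the manager containers (ceph-ansible, kolla-ansible, osism-ansible) to report healthy before continuing. Reconstructed from the xtrace output, the helper looks roughly like this; the polling interval and failure handling are assumptions, since the trace only shows the fast path where each container is already healthy:

```shell
# Sketch of wait_for_container_healthy as seen in the xtrace above.
# Polls Docker's health status until the container reports "healthy",
# giving up after max_attempts checks (sleep interval is assumed).
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "Container $name did not become healthy in time" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5   # interval not visible in the trace; assumed
    done
}
```

In the log this is invoked as `wait_for_container_healthy 60 ceph-ansible` and so on, which is why a single `docker inspect` appears per container: each one already reported `healthy` on the first check.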
2025-05-03 00:40:31.365003 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-05-03 00:40:31.949984 | orchestrator |
2025-05-03 00:40:31.950375 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-03 00:40:31.951252 | orchestrator |
2025-05-03 00:40:31.966301 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-03 00:40:32.219167 | orchestrator | Saturday 03 May 2025 00:40:31 +0000 (0:00:00.499) 0:00:00.499 **********
2025-05-03 00:40:32.219296 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-03 00:40:32.220399 | orchestrator |
2025-05-03 00:40:32.220446 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-03 00:40:32.220776 | orchestrator | Saturday 03 May 2025 00:40:32 +0000 (0:00:00.273) 0:00:00.773 **********
2025-05-03 00:40:32.480985 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:40:32.481417 | orchestrator |
2025-05-03 00:40:32.481903 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:32.482864 | orchestrator | Saturday 03 May 2025 00:40:32 +0000 (0:00:00.261) 0:00:01.034 **********
2025-05-03 00:40:32.952551 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-05-03 00:40:32.956550 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-05-03 00:40:32.957384 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-05-03 00:40:32.957415 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-05-03 00:40:32.957439 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-05-03 00:40:32.957966 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-05-03 00:40:32.958796 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-05-03 00:40:32.959535 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-05-03 00:40:32.960084 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-05-03 00:40:32.960779 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-05-03 00:40:32.961045 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-05-03 00:40:32.961827 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-05-03 00:40:32.962364 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-05-03 00:40:32.963597 | orchestrator |
2025-05-03 00:40:32.964053 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:32.964499 | orchestrator | Saturday 03 May 2025 00:40:32 +0000 (0:00:00.469) 0:00:01.504 **********
2025-05-03 00:40:33.122243 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:33.123210 | orchestrator |
2025-05-03 00:40:33.123244 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:33.123265 | orchestrator | Saturday 03 May 2025 00:40:33 +0000 (0:00:00.170) 0:00:01.675 **********
2025-05-03 00:40:33.299745 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:33.300679 | orchestrator |
2025-05-03 00:40:33.301709 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:33.303794 | orchestrator | Saturday 03 May 2025 00:40:33 +0000 (0:00:00.180) 0:00:01.855 **********
2025-05-03 00:40:33.481368 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:33.482118 | orchestrator |
2025-05-03 00:40:33.483791 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:33.484361 | orchestrator | Saturday 03 May 2025 00:40:33 +0000 (0:00:00.181) 0:00:02.037 **********
2025-05-03 00:40:33.619091 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:33.619997 | orchestrator |
2025-05-03 00:40:33.620041 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:33.622775 | orchestrator | Saturday 03 May 2025 00:40:33 +0000 (0:00:00.137) 0:00:02.175 **********
2025-05-03 00:40:33.780298 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:33.782970 | orchestrator |
2025-05-03 00:40:33.783662 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:33.787142 | orchestrator | Saturday 03 May 2025 00:40:33 +0000 (0:00:00.156) 0:00:02.331 **********
2025-05-03 00:40:33.957203 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:33.957694 | orchestrator |
2025-05-03 00:40:33.957734 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:33.959188 | orchestrator | Saturday 03 May 2025 00:40:33 +0000 (0:00:00.179) 0:00:02.511 **********
2025-05-03 00:40:34.126234 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:34.128322 | orchestrator |
2025-05-03 00:40:34.128370 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:34.130005 | orchestrator | Saturday 03 May 2025 00:40:34 +0000 (0:00:00.170) 0:00:02.681 **********
2025-05-03 00:40:34.318275 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:34.318780 | orchestrator |
2025-05-03 00:40:34.319515 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:34.319787 | orchestrator | Saturday 03 May 2025 00:40:34 +0000 (0:00:00.192) 0:00:02.873 **********
2025-05-03 00:40:34.884164 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9d13b5c6-5c37-4969-9bdf-e1b816fbff4c)
2025-05-03 00:40:34.886142 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9d13b5c6-5c37-4969-9bdf-e1b816fbff4c)
2025-05-03 00:40:34.887825 | orchestrator |
2025-05-03 00:40:34.889237 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:34.890130 | orchestrator | Saturday 03 May 2025 00:40:34 +0000 (0:00:00.566) 0:00:03.439 **********
2025-05-03 00:40:35.502120 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_60710ea4-1ba5-44da-b34f-cb4cc5f20e97)
2025-05-03 00:40:35.505045 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_60710ea4-1ba5-44da-b34f-cb4cc5f20e97)
2025-05-03 00:40:35.505454 | orchestrator |
2025-05-03 00:40:35.506082 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:35.506578 | orchestrator | Saturday 03 May 2025 00:40:35 +0000 (0:00:00.619) 0:00:04.059 **********
2025-05-03 00:40:35.895715 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_494ae4e2-fb03-468f-bae0-ffa1e1c51b21)
2025-05-03 00:40:35.898717 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_494ae4e2-fb03-468f-bae0-ffa1e1c51b21)
2025-05-03 00:40:35.898765 | orchestrator |
2025-05-03 00:40:35.899082 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:35.899151 | orchestrator | Saturday 03 May 2025 00:40:35 +0000 (0:00:00.392) 0:00:04.452 **********
2025-05-03 00:40:36.258195 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fbc89abf-41e4-403a-af47-fe4d6db2bcc8)
2025-05-03 00:40:36.258699 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fbc89abf-41e4-403a-af47-fe4d6db2bcc8)
2025-05-03 00:40:36.260086 | orchestrator |
2025-05-03 00:40:36.260224 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:36.263702 | orchestrator | Saturday 03 May 2025 00:40:36 +0000 (0:00:00.362) 0:00:04.814 **********
2025-05-03 00:40:36.544605 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-03 00:40:36.544911 | orchestrator |
2025-05-03 00:40:36.545388 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:40:36.882153 | orchestrator | Saturday 03 May 2025 00:40:36 +0000 (0:00:00.283) 0:00:05.098 **********
2025-05-03 00:40:36.882263 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-05-03 00:40:36.883280 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-05-03 00:40:36.883354 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-05-03 00:40:36.883578 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-05-03 00:40:36.883820 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-05-03 00:40:36.885595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-05-03 00:40:36.886515 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-05-03 00:40:36.886586 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-05-03 00:40:36.887211 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-05-03 00:40:36.889223 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-05-03 00:40:36.889474 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-05-03 00:40:36.889809 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-05-03 00:40:36.890166 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-05-03 00:40:36.890551 | orchestrator |
2025-05-03 00:40:36.890768 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:40:36.891133 | orchestrator | Saturday 03 May 2025 00:40:36 +0000 (0:00:00.341) 0:00:05.439 **********
2025-05-03 00:40:37.059152 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:37.059335 | orchestrator |
2025-05-03 00:40:37.059367 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:40:37.060288 | orchestrator | Saturday 03 May 2025 00:40:37 +0000 (0:00:00.175) 0:00:05.615 **********
2025-05-03 00:40:37.241891 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:37.242141 | orchestrator |
2025-05-03 00:40:37.242233 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:40:37.242661 | orchestrator | Saturday 03 May 2025 00:40:37 +0000 (0:00:00.180) 0:00:05.795 **********
2025-05-03 00:40:37.418779 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:37.419008 | orchestrator |
2025-05-03 00:40:37.419288 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:40:37.419714 | orchestrator | Saturday 03 May 2025 00:40:37 +0000 (0:00:00.178) 0:00:05.974 **********
2025-05-03 00:40:37.586132 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:37.586301 | orchestrator |
2025-05-03 00:40:37.586331 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:40:37.586434 | orchestrator | Saturday 03 May 2025 00:40:37 +0000 (0:00:00.168) 0:00:06.142 **********
2025-05-03 00:40:37.730822 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:37.731011 | orchestrator |
2025-05-03 00:40:37.731739 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:40:37.731909 | orchestrator | Saturday 03 May 2025 00:40:37 +0000 (0:00:00.145) 0:00:06.288 **********
2025-05-03 00:40:38.168433 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:38.168624 | orchestrator |
2025-05-03 00:40:38.169091 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:40:38.169586 | orchestrator | Saturday 03 May 2025 00:40:38 +0000 (0:00:00.437) 0:00:06.725 **********
2025-05-03 00:40:38.325181 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:38.328135 | orchestrator |
2025-05-03 00:40:38.328558 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:40:38.328592 | orchestrator | Saturday 03 May 2025 00:40:38 +0000 (0:00:00.156) 0:00:06.882 **********
2025-05-03 00:40:38.523914 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:38.524726 | orchestrator |
2025-05-03 00:40:38.525266 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:40:38.525300 | orchestrator | Saturday 03 May 2025 00:40:38 +0000 (0:00:00.198) 0:00:07.081 **********
2025-05-03 00:40:39.175938 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-05-03 00:40:39.176105 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-05-03 00:40:39.176653 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-05-03 00:40:39.177573 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-05-03 00:40:39.177964 | orchestrator |
2025-05-03 00:40:39.177999 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:40:39.178903 | orchestrator | Saturday 03 May 2025 00:40:39 +0000 (0:00:00.648) 0:00:07.729 **********
2025-05-03 00:40:39.363045 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:39.367239 | orchestrator |
2025-05-03 00:40:39.581796 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:40:39.582073 | orchestrator | Saturday 03 May 2025 00:40:39 +0000 (0:00:00.189) 0:00:07.919 **********
2025-05-03 00:40:39.582120 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:39.583434 | orchestrator |
2025-05-03 00:40:39.583504 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:40:39.583530 | orchestrator | Saturday 03 May 2025 00:40:39 +0000 (0:00:00.219) 0:00:08.138 **********
2025-05-03 00:40:39.744140 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:39.746541 | orchestrator |
2025-05-03 00:40:39.747458 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:40:39.749811 | orchestrator | Saturday 03 May 2025 00:40:39 +0000 (0:00:00.162) 0:00:08.300 **********
2025-05-03 00:40:39.952919 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:39.953520 | orchestrator |
2025-05-03 00:40:39.953911 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-05-03 00:40:39.954906 | orchestrator | Saturday 03 May 2025 00:40:39 +0000 (0:00:00.203) 0:00:08.504 **********
2025-05-03 00:40:40.115456 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-05-03 00:40:40.115616 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-05-03 00:40:40.115644 | orchestrator |
2025-05-03 00:40:40.117005 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-05-03 00:40:40.117358 | orchestrator | Saturday 03 May 2025 00:40:40 +0000 (0:00:00.167) 0:00:08.671 **********
2025-05-03 00:40:40.236729 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:40.237699 | orchestrator |
2025-05-03 00:40:40.238664 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-05-03 00:40:40.239305 | orchestrator | Saturday 03 May 2025 00:40:40 +0000 (0:00:00.120) 0:00:08.792 **********
2025-05-03 00:40:40.486994 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:40.487142 | orchestrator |
2025-05-03 00:40:40.487301 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-05-03 00:40:40.488088 | orchestrator | Saturday 03 May 2025 00:40:40 +0000 (0:00:00.242) 0:00:09.035 **********
2025-05-03 00:40:40.594733 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:40.595477 | orchestrator |
2025-05-03 00:40:40.597667 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-05-03 00:40:40.726519 | orchestrator | Saturday 03 May 2025 00:40:40 +0000 (0:00:00.116) 0:00:09.151 **********
2025-05-03 00:40:40.726629 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:40:40.730116 | orchestrator |
2025-05-03 00:40:40.730513 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-05-03 00:40:40.731484 | orchestrator | Saturday 03 May 2025 00:40:40 +0000 (0:00:00.127) 0:00:09.279 **********
2025-05-03 00:40:40.897915 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'eca5292b-8794-515a-ad73-b5efc7970d6a'}})
2025-05-03 00:40:40.899417 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a7a18630-ef35-59a0-a2f0-363b4ab3cd76'}})
2025-05-03 00:40:40.900172 | orchestrator |
2025-05-03 00:40:40.900900 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-05-03 00:40:40.901606 | orchestrator | Saturday 03 May 2025 00:40:40 +0000 (0:00:00.171) 0:00:09.450 **********
2025-05-03 00:40:41.035808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'eca5292b-8794-515a-ad73-b5efc7970d6a'}})
2025-05-03 00:40:41.036132 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a7a18630-ef35-59a0-a2f0-363b4ab3cd76'}})
2025-05-03 00:40:41.038254 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:41.038458 | orchestrator |
2025-05-03 00:40:41.039061 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-05-03 00:40:41.042852 | orchestrator | Saturday 03 May 2025 00:40:41 +0000 (0:00:00.141) 0:00:09.592 **********
2025-05-03 00:40:41.194692 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'eca5292b-8794-515a-ad73-b5efc7970d6a'}})
2025-05-03 00:40:41.195975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a7a18630-ef35-59a0-a2f0-363b4ab3cd76'}})
2025-05-03 00:40:41.199008 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:41.201350 | orchestrator |
2025-05-03 00:40:41.202107 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-05-03 00:40:41.202142 | orchestrator | Saturday 03 May 2025 00:40:41 +0000 (0:00:00.159) 0:00:09.751 **********
2025-05-03 00:40:41.329428 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'eca5292b-8794-515a-ad73-b5efc7970d6a'}})
2025-05-03 00:40:41.330334 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a7a18630-ef35-59a0-a2f0-363b4ab3cd76'}})
2025-05-03 00:40:41.331583 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:41.334399 | orchestrator |
2025-05-03 00:40:41.335035 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-03 00:40:41.335389 | orchestrator | Saturday 03 May 2025 00:40:41 +0000 (0:00:00.134) 0:00:09.885 **********
2025-05-03 00:40:41.458591 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:40:41.458990 | orchestrator |
2025-05-03 00:40:41.459725 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-03 00:40:41.460261 | orchestrator | Saturday 03 May 2025 00:40:41 +0000 (0:00:00.128) 0:00:10.014 **********
2025-05-03 00:40:41.592930 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:40:41.593473 | orchestrator |
2025-05-03 00:40:41.594347 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-03 00:40:41.594863 | orchestrator | Saturday 03 May 2025 00:40:41 +0000 (0:00:00.135) 0:00:10.149 **********
2025-05-03 00:40:41.719526 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:41.719662 | orchestrator |
2025-05-03 00:40:41.719991 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-03 00:40:41.720661 | orchestrator | Saturday 03 May 2025 00:40:41 +0000 (0:00:00.126) 0:00:10.276 **********
2025-05-03 00:40:41.843937 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:41.845295 | orchestrator |
2025-05-03 00:40:41.846430 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-03 00:40:41.847070 | orchestrator | Saturday 03 May 2025 00:40:41 +0000 (0:00:00.123) 0:00:10.400 **********
2025-05-03 00:40:42.004034 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:42.255640 | orchestrator |
2025-05-03 00:40:42.255734 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-03 00:40:42.255753 | orchestrator | Saturday 03 May 2025 00:40:41 +0000 (0:00:00.157) 0:00:10.557 **********
2025-05-03 00:40:42.255783 | orchestrator | ok: [testbed-node-3] => {
2025-05-03 00:40:42.257032 | orchestrator |     "ceph_osd_devices": {
2025-05-03 00:40:42.257065 | orchestrator |         "sdb": {
2025-05-03 00:40:42.260814 | orchestrator |             "osd_lvm_uuid": "eca5292b-8794-515a-ad73-b5efc7970d6a"
2025-05-03 00:40:42.261597 | orchestrator |         },
2025-05-03 00:40:42.262309 | orchestrator |         "sdc": {
2025-05-03 00:40:42.263122 | orchestrator |             "osd_lvm_uuid": "a7a18630-ef35-59a0-a2f0-363b4ab3cd76"
2025-05-03 00:40:42.263999 | orchestrator |         }
2025-05-03 00:40:42.264289 | orchestrator |     }
2025-05-03 00:40:42.265668 | orchestrator | }
2025-05-03 00:40:42.266361 | orchestrator |
2025-05-03 00:40:42.266618 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-03 00:40:42.267024 | orchestrator | Saturday 03 May 2025 00:40:42 +0000 (0:00:00.254) 0:00:10.811 **********
2025-05-03 00:40:42.389045 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:42.390342 | orchestrator |
2025-05-03 00:40:42.390782 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-03 00:40:42.391050 | orchestrator | Saturday 03 May 2025 00:40:42 +0000 (0:00:00.131) 0:00:10.942 **********
2025-05-03 00:40:42.516515 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:42.518336 | orchestrator |
2025-05-03 00:40:42.518435 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-03 00:40:42.518957 | orchestrator | Saturday 03 May 2025 00:40:42 +0000 (0:00:00.130) 0:00:11.073 **********
2025-05-03 00:40:42.643554 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:40:42.643783 | orchestrator |
2025-05-03 00:40:42.644355 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-03 00:40:42.644729 | orchestrator | Saturday 03 May 2025 00:40:42 +0000 (0:00:00.127) 0:00:11.200 **********
2025-05-03 00:40:42.887602 | orchestrator | changed: [testbed-node-3] => {
2025-05-03 00:40:42.889207 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-05-03 00:40:42.894496 | orchestrator |         "ceph_osd_devices": {
2025-05-03 00:40:42.894761 | orchestrator |             "sdb": {
2025-05-03 00:40:42.894793 | orchestrator |                 "osd_lvm_uuid": "eca5292b-8794-515a-ad73-b5efc7970d6a"
2025-05-03 00:40:42.896202 | orchestrator |             },
2025-05-03 00:40:42.897135 | orchestrator |             "sdc": {
2025-05-03 00:40:42.897948 | orchestrator |                 "osd_lvm_uuid": "a7a18630-ef35-59a0-a2f0-363b4ab3cd76"
2025-05-03 00:40:42.899567 | orchestrator |             }
2025-05-03 00:40:42.900059 | orchestrator |         },
2025-05-03 00:40:42.900729 | orchestrator |         "lvm_volumes": [
2025-05-03 00:40:42.900860 | orchestrator |             {
2025-05-03 00:40:42.901426 | orchestrator |                 "data": "osd-block-eca5292b-8794-515a-ad73-b5efc7970d6a",
2025-05-03 00:40:42.902512 | orchestrator |                 "data_vg": "ceph-eca5292b-8794-515a-ad73-b5efc7970d6a"
2025-05-03 00:40:42.902656 | orchestrator |             },
2025-05-03 00:40:42.903006 | orchestrator |             {
2025-05-03 00:40:42.903404 | orchestrator |                 "data": "osd-block-a7a18630-ef35-59a0-a2f0-363b4ab3cd76",
2025-05-03 00:40:42.904320 | orchestrator |                 "data_vg": "ceph-a7a18630-ef35-59a0-a2f0-363b4ab3cd76"
2025-05-03 00:40:42.905015 | orchestrator |             }
2025-05-03 00:40:42.905666 | orchestrator |         ]
2025-05-03 00:40:42.906133 | orchestrator |     }
2025-05-03 00:40:42.907208 | orchestrator | }
2025-05-03 00:40:42.907731 | orchestrator |
2025-05-03 00:40:42.907948 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-03 00:40:42.909300 | orchestrator | Saturday 03 May 2025 00:40:42 +0000 (0:00:00.240) 0:00:11.441 **********
2025-05-03 00:40:45.051386 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-03 00:40:45.051507 | orchestrator |
2025-05-03 00:40:45.051892 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-03 00:40:45.052300 | orchestrator |
2025-05-03 00:40:45.052413 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-03 00:40:45.052893 | orchestrator | Saturday 03 May 2025 00:40:45 +0000 (0:00:02.161) 0:00:13.602 **********
2025-05-03 00:40:45.304144 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-05-03 00:40:45.304310 | orchestrator |
2025-05-03 00:40:45.304428 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-03 00:40:45.304459 | orchestrator | Saturday 03 May 2025 00:40:45 +0000 (0:00:00.258) 0:00:13.861 **********
2025-05-03 00:40:45.518578 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:40:45.519436 | orchestrator |
2025-05-03 00:40:45.519599 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:45.519969 | orchestrator | Saturday 03 May 2025 00:40:45 +0000 (0:00:00.214) 0:00:14.075 **********
2025-05-03 00:40:45.847206 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-05-03 00:40:45.847351 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-05-03 00:40:45.847744 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-05-03 00:40:45.849571 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-05-03 00:40:45.850424 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-05-03 00:40:45.853044 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-05-03 00:40:45.854316 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-05-03 00:40:45.855945 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-05-03 00:40:45.856334 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-05-03 00:40:45.859005 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-05-03 00:40:45.859338 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-05-03 00:40:45.859369 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-05-03 00:40:45.862660 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-05-03 00:40:45.862991 | orchestrator |
2025-05-03 00:40:45.863445 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:45.863737 | orchestrator | Saturday 03 May 2025 00:40:45 +0000 (0:00:00.323) 0:00:14.399 **********
2025-05-03 00:40:46.024139 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:40:46.024712 | orchestrator |
2025-05-03 00:40:46.025353 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:46.030211 | orchestrator | Saturday 03 May 2025 00:40:46 +0000 (0:00:00.180) 0:00:14.580 **********
2025-05-03 00:40:46.207265 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:40:46.208392 | orchestrator |
2025-05-03 00:40:46.208567 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:46.209123 | orchestrator | Saturday 03 May 2025 00:40:46 +0000 (0:00:00.183) 0:00:14.764 **********
2025-05-03 00:40:46.365966 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:40:46.366713 | orchestrator |
2025-05-03 00:40:46.367212 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:46.367651 | orchestrator | Saturday 03 May 2025 00:40:46 +0000 (0:00:00.158) 0:00:14.923 **********
2025-05-03 00:40:46.547957 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:40:46.549152 | orchestrator |
2025-05-03 00:40:46.550762 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:46.553803 | orchestrator | Saturday 03 May 2025 00:40:46 +0000 (0:00:00.181) 0:00:15.104 **********
2025-05-03 00:40:46.990490 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:40:46.990654 | orchestrator |
2025-05-03 00:40:46.992541 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:47.192501 | orchestrator | Saturday 03 May 2025 00:40:46 +0000 (0:00:00.440) 0:00:15.545 **********
2025-05-03 00:40:47.192582 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:40:47.193114 | orchestrator |
2025-05-03 00:40:47.195901 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:47.196014 | orchestrator | Saturday 03 May 2025 00:40:47 +0000 (0:00:00.202) 0:00:15.748 **********
2025-05-03 00:40:47.397723 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:40:47.399147 | orchestrator |
2025-05-03 00:40:47.401934 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:47.402061 | orchestrator | Saturday 03 May 2025 00:40:47 +0000 (0:00:00.205) 0:00:15.953 **********
2025-05-03 00:40:47.571735 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:40:47.571911 | orchestrator |
2025-05-03 00:40:47.572451 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:47.572985 | orchestrator | Saturday 03 May 2025 00:40:47 +0000 (0:00:00.173) 0:00:16.127 **********
2025-05-03 00:40:47.915132 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_80e5cdca-5597-4b73-960a-f9a5fdfd6b66)
2025-05-03 00:40:47.915761 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_80e5cdca-5597-4b73-960a-f9a5fdfd6b66)
2025-05-03 00:40:47.916097 | orchestrator |
2025-05-03 00:40:47.916921 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:47.917369 | orchestrator | Saturday 03 May 2025 00:40:47 +0000 (0:00:00.344) 0:00:16.471 **********
2025-05-03 00:40:48.311250 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8a5c9f8e-4062-4859-b774-db2eb35d9068)
2025-05-03 00:40:48.314301 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8a5c9f8e-4062-4859-b774-db2eb35d9068)
2025-05-03 00:40:48.314920 | orchestrator |
2025-05-03 00:40:48.317073 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:48.317431 | orchestrator | Saturday 03 May 2025 00:40:48 +0000 (0:00:00.394) 0:00:16.866 **********
2025-05-03 00:40:48.728577 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6a5303a2-e8ba-422b-a7dc-ef5d91cab650)
2025-05-03 00:40:48.730630 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6a5303a2-e8ba-422b-a7dc-ef5d91cab650)
2025-05-03 00:40:48.731645 | orchestrator |
2025-05-03 00:40:48.735522 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:48.735793 | orchestrator | Saturday 03 May 2025 00:40:48 +0000 (0:00:00.418) 0:00:17.284 **********
2025-05-03 00:40:49.153707 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_78b7c3f7-b361-43c7-bb55-097042834471)
2025-05-03 00:40:49.154430 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_78b7c3f7-b361-43c7-bb55-097042834471)
2025-05-03 00:40:49.157902 | orchestrator |
2025-05-03 00:40:49.158610 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:40:49.159374 | orchestrator | Saturday 03 May 2025 00:40:49 +0000 (0:00:00.422) 0:00:17.706 **********
2025-05-03 00:40:49.478787 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-03 00:40:49.479046 | orchestrator |
2025-05-03 00:40:49.479602 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:40:49.480096 | orchestrator | Saturday 03 May 2025 00:40:49 +0000 (0:00:00.327) 0:00:18.034 **********
2025-05-03 00:40:50.092448 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-05-03 00:40:50.093436 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-05-03 00:40:50.093487 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-05-03 00:40:50.094566 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-05-03 00:40:50.094967 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-05-03 00:40:50.096713 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-05-03 00:40:50.097306 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-05-03 00:40:50.099787 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-05-03 00:40:50.100669 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-05-03 00:40:50.103831 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-05-03 00:40:50.104465 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
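An aside on the data above: the "Print configuration data" output for testbed-node-3 shows that each `lvm_volumes` entry is a straight derivation from the per-device `osd_lvm_uuid` ("osd-block-<uuid>" for the LV, "ceph-<uuid>" for the VG). A minimal sketch of that mapping, assuming only the values visible in the log (the helper name is ours, not part of the playbook):

```python
# Illustrative sketch: rebuild the block-only lvm_volumes list from the
# ceph_osd_devices mapping printed by the ceph-configure-lvm-volumes task.
def lvm_volumes_from_osd_devices(ceph_osd_devices):
    """Map each device's osd_lvm_uuid to its LV name and VG name."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in ceph_osd_devices.values()
    ]

# Values copied from the testbed-node-3 output above:
devices = {
    "sdb": {"osd_lvm_uuid": "eca5292b-8794-515a-ad73-b5efc7970d6a"},
    "sdc": {"osd_lvm_uuid": "a7a18630-ef35-59a0-a2f0-363b4ab3cd76"},
}
volumes = lvm_volumes_from_osd_devices(devices)
```

Running this reproduces the two `data`/`data_vg` pairs shown in the `_ceph_configure_lvm_config_data` debug output.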
2025-05-03 00:40:50.107935 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-03 00:40:50.108341 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-03 00:40:50.108827 | orchestrator | 2025-05-03 00:40:50.109359 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:40:50.109794 | orchestrator | Saturday 03 May 2025 00:40:50 +0000 (0:00:00.613) 0:00:18.647 ********** 2025-05-03 00:40:50.305283 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:40:50.305523 | orchestrator | 2025-05-03 00:40:50.305774 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:40:50.305815 | orchestrator | Saturday 03 May 2025 00:40:50 +0000 (0:00:00.212) 0:00:18.859 ********** 2025-05-03 00:40:50.505527 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:40:50.505721 | orchestrator | 2025-05-03 00:40:50.505753 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:40:50.507492 | orchestrator | Saturday 03 May 2025 00:40:50 +0000 (0:00:00.198) 0:00:19.057 ********** 2025-05-03 00:40:50.723713 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:40:50.724033 | orchestrator | 2025-05-03 00:40:50.724812 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:40:50.724883 | orchestrator | Saturday 03 May 2025 00:40:50 +0000 (0:00:00.222) 0:00:19.280 ********** 2025-05-03 00:40:50.922293 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:40:50.923125 | orchestrator | 2025-05-03 00:40:50.924099 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:40:50.924730 | orchestrator | Saturday 03 May 2025 00:40:50 +0000 (0:00:00.197) 0:00:19.477 ********** 2025-05-03 00:40:51.136333 
| orchestrator | skipping: [testbed-node-4] 2025-05-03 00:40:51.136733 | orchestrator | 2025-05-03 00:40:51.138006 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:40:51.141943 | orchestrator | Saturday 03 May 2025 00:40:51 +0000 (0:00:00.211) 0:00:19.689 ********** 2025-05-03 00:40:51.351492 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:40:51.351714 | orchestrator | 2025-05-03 00:40:51.352554 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:40:51.353385 | orchestrator | Saturday 03 May 2025 00:40:51 +0000 (0:00:00.217) 0:00:19.907 ********** 2025-05-03 00:40:51.572605 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:40:51.573454 | orchestrator | 2025-05-03 00:40:51.577111 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:40:51.577830 | orchestrator | Saturday 03 May 2025 00:40:51 +0000 (0:00:00.217) 0:00:20.124 ********** 2025-05-03 00:40:51.780473 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:40:51.780646 | orchestrator | 2025-05-03 00:40:51.781280 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:40:51.782166 | orchestrator | Saturday 03 May 2025 00:40:51 +0000 (0:00:00.211) 0:00:20.336 ********** 2025-05-03 00:40:52.644358 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-03 00:40:52.644538 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-03 00:40:52.645050 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-03 00:40:52.645384 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-03 00:40:52.645802 | orchestrator | 2025-05-03 00:40:52.647391 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:40:52.648009 | orchestrator | Saturday 03 May 2025 00:40:52 +0000 (0:00:00.861) 0:00:21.197 
********** 2025-05-03 00:40:52.844798 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:40:52.845909 | orchestrator | 2025-05-03 00:40:52.847007 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:40:52.848105 | orchestrator | Saturday 03 May 2025 00:40:52 +0000 (0:00:00.202) 0:00:21.400 ********** 2025-05-03 00:40:53.524132 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:40:53.524531 | orchestrator | 2025-05-03 00:40:53.525532 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:40:53.526637 | orchestrator | Saturday 03 May 2025 00:40:53 +0000 (0:00:00.678) 0:00:22.079 ********** 2025-05-03 00:40:53.779159 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:40:53.780416 | orchestrator | 2025-05-03 00:40:53.780994 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:40:53.781300 | orchestrator | Saturday 03 May 2025 00:40:53 +0000 (0:00:00.254) 0:00:22.334 ********** 2025-05-03 00:40:53.994363 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:40:53.994512 | orchestrator | 2025-05-03 00:40:53.994539 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-03 00:40:53.994913 | orchestrator | Saturday 03 May 2025 00:40:53 +0000 (0:00:00.216) 0:00:22.550 ********** 2025-05-03 00:40:54.176808 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-05-03 00:40:54.178866 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-05-03 00:40:54.179793 | orchestrator | 2025-05-03 00:40:54.180753 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-03 00:40:54.181922 | orchestrator | Saturday 03 May 2025 00:40:54 +0000 (0:00:00.180) 0:00:22.730 ********** 2025-05-03 00:40:54.339450 | orchestrator | skipping: 
[testbed-node-4] 2025-05-03 00:40:54.339624 | orchestrator | 2025-05-03 00:40:54.340253 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-03 00:40:54.341277 | orchestrator | Saturday 03 May 2025 00:40:54 +0000 (0:00:00.161) 0:00:22.891 ********** 2025-05-03 00:40:54.512983 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:40:54.513102 | orchestrator | 2025-05-03 00:40:54.514669 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-03 00:40:54.515148 | orchestrator | Saturday 03 May 2025 00:40:54 +0000 (0:00:00.176) 0:00:23.067 ********** 2025-05-03 00:40:54.661302 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:40:54.665075 | orchestrator | 2025-05-03 00:40:54.665946 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-03 00:40:54.666006 | orchestrator | Saturday 03 May 2025 00:40:54 +0000 (0:00:00.146) 0:00:23.214 ********** 2025-05-03 00:40:54.803576 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:40:54.804691 | orchestrator | 2025-05-03 00:40:54.807541 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-03 00:40:54.975128 | orchestrator | Saturday 03 May 2025 00:40:54 +0000 (0:00:00.144) 0:00:23.358 ********** 2025-05-03 00:40:54.975272 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ba494882-e80b-5600-bb3d-47da88e10312'}}) 2025-05-03 00:40:54.975919 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1900210e-f5cf-596b-8948-bbf6ca001e1a'}}) 2025-05-03 00:40:54.975989 | orchestrator | 2025-05-03 00:40:54.976593 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-03 00:40:54.976907 | orchestrator | Saturday 03 May 2025 00:40:54 +0000 (0:00:00.169) 0:00:23.527 ********** 2025-05-03 00:40:55.141257 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ba494882-e80b-5600-bb3d-47da88e10312'}})  2025-05-03 00:40:55.141435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1900210e-f5cf-596b-8948-bbf6ca001e1a'}})  2025-05-03 00:40:55.143588 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:40:55.144709 | orchestrator | 2025-05-03 00:40:55.146325 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-03 00:40:55.354872 | orchestrator | Saturday 03 May 2025 00:40:55 +0000 (0:00:00.167) 0:00:23.695 ********** 2025-05-03 00:40:55.355000 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ba494882-e80b-5600-bb3d-47da88e10312'}})  2025-05-03 00:40:55.355068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1900210e-f5cf-596b-8948-bbf6ca001e1a'}})  2025-05-03 00:40:55.355081 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:40:55.355092 | orchestrator | 2025-05-03 00:40:55.355103 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-03 00:40:55.357581 | orchestrator | Saturday 03 May 2025 00:40:55 +0000 (0:00:00.210) 0:00:23.905 ********** 2025-05-03 00:40:55.774430 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ba494882-e80b-5600-bb3d-47da88e10312'}})  2025-05-03 00:40:55.774609 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1900210e-f5cf-596b-8948-bbf6ca001e1a'}})  2025-05-03 00:40:55.775683 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:40:55.776685 | orchestrator | 2025-05-03 00:40:55.779164 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-03 00:40:55.919711 | orchestrator | Saturday 03 May 2025 00:40:55 +0000 
(0:00:00.422) 0:00:24.327 ********** 2025-05-03 00:40:55.919834 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:40:55.921083 | orchestrator | 2025-05-03 00:40:55.922377 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-03 00:40:55.924948 | orchestrator | Saturday 03 May 2025 00:40:55 +0000 (0:00:00.147) 0:00:24.474 ********** 2025-05-03 00:40:56.063201 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:40:56.064417 | orchestrator | 2025-05-03 00:40:56.064577 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-03 00:40:56.064657 | orchestrator | Saturday 03 May 2025 00:40:56 +0000 (0:00:00.143) 0:00:24.617 ********** 2025-05-03 00:40:56.202285 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:40:56.202989 | orchestrator | 2025-05-03 00:40:56.203818 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-03 00:40:56.205218 | orchestrator | Saturday 03 May 2025 00:40:56 +0000 (0:00:00.139) 0:00:24.757 ********** 2025-05-03 00:40:56.340030 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:40:56.340785 | orchestrator | 2025-05-03 00:40:56.341457 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-03 00:40:56.342656 | orchestrator | Saturday 03 May 2025 00:40:56 +0000 (0:00:00.137) 0:00:24.895 ********** 2025-05-03 00:40:56.485097 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:40:56.485688 | orchestrator | 2025-05-03 00:40:56.486154 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-03 00:40:56.487551 | orchestrator | Saturday 03 May 2025 00:40:56 +0000 (0:00:00.145) 0:00:25.040 ********** 2025-05-03 00:40:56.606403 | orchestrator | ok: [testbed-node-4] => { 2025-05-03 00:40:56.608016 | orchestrator |  "ceph_osd_devices": { 2025-05-03 00:40:56.608421 | orchestrator |  "sdb": 
{ 2025-05-03 00:40:56.609937 | orchestrator |  "osd_lvm_uuid": "ba494882-e80b-5600-bb3d-47da88e10312" 2025-05-03 00:40:56.610693 | orchestrator |  }, 2025-05-03 00:40:56.611947 | orchestrator |  "sdc": { 2025-05-03 00:40:56.612785 | orchestrator |  "osd_lvm_uuid": "1900210e-f5cf-596b-8948-bbf6ca001e1a" 2025-05-03 00:40:56.613387 | orchestrator |  } 2025-05-03 00:40:56.614206 | orchestrator |  } 2025-05-03 00:40:56.614567 | orchestrator | } 2025-05-03 00:40:56.614737 | orchestrator | 2025-05-03 00:40:56.615266 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-03 00:40:56.615383 | orchestrator | Saturday 03 May 2025 00:40:56 +0000 (0:00:00.120) 0:00:25.161 ********** 2025-05-03 00:40:56.748784 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:40:56.749482 | orchestrator | 2025-05-03 00:40:56.749579 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-03 00:40:56.749981 | orchestrator | Saturday 03 May 2025 00:40:56 +0000 (0:00:00.141) 0:00:25.303 ********** 2025-05-03 00:40:56.887371 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:40:56.887523 | orchestrator | 2025-05-03 00:40:56.888702 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-03 00:40:56.889518 | orchestrator | Saturday 03 May 2025 00:40:56 +0000 (0:00:00.139) 0:00:25.442 ********** 2025-05-03 00:40:57.035124 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:40:57.036501 | orchestrator | 2025-05-03 00:40:57.036538 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-03 00:40:57.037267 | orchestrator | Saturday 03 May 2025 00:40:57 +0000 (0:00:00.146) 0:00:25.589 ********** 2025-05-03 00:40:57.501080 | orchestrator | changed: [testbed-node-4] => { 2025-05-03 00:40:57.501557 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-03 00:40:57.501790 | orchestrator 
|  "ceph_osd_devices": { 2025-05-03 00:40:57.502539 | orchestrator |  "sdb": { 2025-05-03 00:40:57.503154 | orchestrator |  "osd_lvm_uuid": "ba494882-e80b-5600-bb3d-47da88e10312" 2025-05-03 00:40:57.503931 | orchestrator |  }, 2025-05-03 00:40:57.504783 | orchestrator |  "sdc": { 2025-05-03 00:40:57.504871 | orchestrator |  "osd_lvm_uuid": "1900210e-f5cf-596b-8948-bbf6ca001e1a" 2025-05-03 00:40:57.505503 | orchestrator |  } 2025-05-03 00:40:57.506286 | orchestrator |  }, 2025-05-03 00:40:57.506732 | orchestrator |  "lvm_volumes": [ 2025-05-03 00:40:57.507313 | orchestrator |  { 2025-05-03 00:40:57.507960 | orchestrator |  "data": "osd-block-ba494882-e80b-5600-bb3d-47da88e10312", 2025-05-03 00:40:57.508443 | orchestrator |  "data_vg": "ceph-ba494882-e80b-5600-bb3d-47da88e10312" 2025-05-03 00:40:57.508761 | orchestrator |  }, 2025-05-03 00:40:57.509387 | orchestrator |  { 2025-05-03 00:40:57.509959 | orchestrator |  "data": "osd-block-1900210e-f5cf-596b-8948-bbf6ca001e1a", 2025-05-03 00:40:57.510473 | orchestrator |  "data_vg": "ceph-1900210e-f5cf-596b-8948-bbf6ca001e1a" 2025-05-03 00:40:57.510655 | orchestrator |  } 2025-05-03 00:40:57.510937 | orchestrator |  ] 2025-05-03 00:40:57.511744 | orchestrator |  } 2025-05-03 00:40:57.511901 | orchestrator | } 2025-05-03 00:40:57.511928 | orchestrator | 2025-05-03 00:40:57.512054 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-03 00:40:57.513042 | orchestrator | Saturday 03 May 2025 00:40:57 +0000 (0:00:00.465) 0:00:26.055 ********** 2025-05-03 00:40:58.980282 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-03 00:40:58.980582 | orchestrator | 2025-05-03 00:40:58.981310 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-03 00:40:58.981546 | orchestrator | 2025-05-03 00:40:58.983138 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 
2025-05-03 00:40:58.983531 | orchestrator | Saturday 03 May 2025 00:40:58 +0000 (0:00:01.479) 0:00:27.534 ********** 2025-05-03 00:40:59.229069 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-03 00:40:59.229277 | orchestrator | 2025-05-03 00:40:59.229309 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-03 00:40:59.229773 | orchestrator | Saturday 03 May 2025 00:40:59 +0000 (0:00:00.248) 0:00:27.783 ********** 2025-05-03 00:40:59.474808 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:40:59.475908 | orchestrator | 2025-05-03 00:40:59.476730 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:40:59.477618 | orchestrator | Saturday 03 May 2025 00:40:59 +0000 (0:00:00.246) 0:00:28.030 ********** 2025-05-03 00:41:00.007585 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-03 00:41:00.008076 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-03 00:41:00.009899 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-03 00:41:00.012336 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-03 00:41:00.012568 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-05-03 00:41:00.013490 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-03 00:41:00.014094 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-03 00:41:00.014595 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-03 00:41:00.015017 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-03 
00:41:00.015576 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-03 00:41:00.015905 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-03 00:41:00.016488 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-03 00:41:00.016737 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-03 00:41:00.017249 | orchestrator | 2025-05-03 00:41:00.017573 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:41:00.018061 | orchestrator | Saturday 03 May 2025 00:41:00 +0000 (0:00:00.531) 0:00:28.561 ********** 2025-05-03 00:41:00.217600 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:41:00.217793 | orchestrator | 2025-05-03 00:41:00.218410 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:41:00.219130 | orchestrator | Saturday 03 May 2025 00:41:00 +0000 (0:00:00.209) 0:00:28.771 ********** 2025-05-03 00:41:00.422832 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:41:00.423717 | orchestrator | 2025-05-03 00:41:00.424606 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:41:00.427165 | orchestrator | Saturday 03 May 2025 00:41:00 +0000 (0:00:00.206) 0:00:28.978 ********** 2025-05-03 00:41:00.621524 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:41:00.622987 | orchestrator | 2025-05-03 00:41:00.623036 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:41:00.623683 | orchestrator | Saturday 03 May 2025 00:41:00 +0000 (0:00:00.198) 0:00:29.176 ********** 2025-05-03 00:41:00.824981 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:41:00.826073 | orchestrator | 2025-05-03 00:41:00.826814 | orchestrator 
| TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:41:00.829150 | orchestrator | Saturday 03 May 2025 00:41:00 +0000 (0:00:00.203) 0:00:29.379 ********** 2025-05-03 00:41:01.020252 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:41:01.020956 | orchestrator | 2025-05-03 00:41:01.020998 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:41:01.022221 | orchestrator | Saturday 03 May 2025 00:41:01 +0000 (0:00:00.194) 0:00:29.574 ********** 2025-05-03 00:41:01.223483 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:41:01.223678 | orchestrator | 2025-05-03 00:41:01.225085 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:41:01.226012 | orchestrator | Saturday 03 May 2025 00:41:01 +0000 (0:00:00.203) 0:00:29.778 ********** 2025-05-03 00:41:01.432588 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:41:01.432810 | orchestrator | 2025-05-03 00:41:01.433742 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:41:01.434436 | orchestrator | Saturday 03 May 2025 00:41:01 +0000 (0:00:00.209) 0:00:29.987 ********** 2025-05-03 00:41:01.637919 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:41:01.638307 | orchestrator | 2025-05-03 00:41:01.638359 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:41:01.638399 | orchestrator | Saturday 03 May 2025 00:41:01 +0000 (0:00:00.204) 0:00:30.192 ********** 2025-05-03 00:41:02.307082 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4f7b2d31-8f7a-47ec-8821-2cb523ca656c) 2025-05-03 00:41:03.226393 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4f7b2d31-8f7a-47ec-8821-2cb523ca656c) 2025-05-03 00:41:03.226503 | orchestrator | 2025-05-03 00:41:03.226515 | orchestrator | TASK [Add 
known links to the list of available block devices] ****************** 2025-05-03 00:41:03.226528 | orchestrator | Saturday 03 May 2025 00:41:02 +0000 (0:00:00.663) 0:00:30.855 ********** 2025-05-03 00:41:03.226550 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_626178b7-dd78-4872-a9d6-22f12232405d) 2025-05-03 00:41:03.226828 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_626178b7-dd78-4872-a9d6-22f12232405d) 2025-05-03 00:41:03.227743 | orchestrator | 2025-05-03 00:41:03.228549 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:41:03.229651 | orchestrator | Saturday 03 May 2025 00:41:03 +0000 (0:00:00.924) 0:00:31.780 ********** 2025-05-03 00:41:03.654463 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cf08c0e7-08ad-4d2d-8710-ce05fc114cf2) 2025-05-03 00:41:03.654709 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cf08c0e7-08ad-4d2d-8710-ce05fc114cf2) 2025-05-03 00:41:03.655730 | orchestrator | 2025-05-03 00:41:03.656790 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:41:03.658480 | orchestrator | Saturday 03 May 2025 00:41:03 +0000 (0:00:00.429) 0:00:32.209 ********** 2025-05-03 00:41:04.097172 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_592c23d3-c323-4834-ad18-db1726824a9d) 2025-05-03 00:41:04.097320 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_592c23d3-c323-4834-ad18-db1726824a9d) 2025-05-03 00:41:04.098595 | orchestrator | 2025-05-03 00:41:04.099960 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:41:04.100968 | orchestrator | Saturday 03 May 2025 00:41:04 +0000 (0:00:00.439) 0:00:32.648 ********** 2025-05-03 00:41:04.423303 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-03 00:41:04.424758 | 
orchestrator | 2025-05-03 00:41:04.425473 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:41:04.425508 | orchestrator | Saturday 03 May 2025 00:41:04 +0000 (0:00:00.329) 0:00:32.978 ********** 2025-05-03 00:41:04.840140 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-03 00:41:04.840412 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-03 00:41:04.844174 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-03 00:41:04.845338 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-03 00:41:04.845412 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-03 00:41:04.846589 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-03 00:41:04.847556 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-03 00:41:04.848776 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-03 00:41:04.849443 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-03 00:41:04.850181 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-03 00:41:04.850426 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-03 00:41:04.851007 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-03 00:41:04.851673 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-03 00:41:04.851944 | orchestrator | 
2025-05-03 00:41:04.852369 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:41:04.852893 | orchestrator | Saturday 03 May 2025 00:41:04 +0000 (0:00:00.417) 0:00:33.395 ********** 2025-05-03 00:41:05.032713 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:41:05.033575 | orchestrator | 2025-05-03 00:41:05.033674 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:41:05.034169 | orchestrator | Saturday 03 May 2025 00:41:05 +0000 (0:00:00.192) 0:00:33.587 ********** 2025-05-03 00:41:05.226949 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:41:05.227495 | orchestrator | 2025-05-03 00:41:05.228490 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:41:05.229342 | orchestrator | Saturday 03 May 2025 00:41:05 +0000 (0:00:00.193) 0:00:33.781 ********** 2025-05-03 00:41:05.421697 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:41:05.423825 | orchestrator | 2025-05-03 00:41:05.424398 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:41:05.425114 | orchestrator | Saturday 03 May 2025 00:41:05 +0000 (0:00:00.196) 0:00:33.977 ********** 2025-05-03 00:41:05.620009 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:41:05.620232 | orchestrator | 2025-05-03 00:41:05.621185 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:41:05.621832 | orchestrator | Saturday 03 May 2025 00:41:05 +0000 (0:00:00.198) 0:00:34.175 ********** 2025-05-03 00:41:05.811147 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:41:05.811689 | orchestrator | 2025-05-03 00:41:05.812555 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:41:05.813468 | orchestrator | Saturday 03 May 2025 00:41:05 +0000 
(0:00:00.190) 0:00:34.365 ********** 2025-05-03 00:41:06.282263 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:41:06.284544 | orchestrator | 2025-05-03 00:41:06.284608 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:41:06.285186 | orchestrator | Saturday 03 May 2025 00:41:06 +0000 (0:00:00.465) 0:00:34.831 ********** 2025-05-03 00:41:06.490369 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:41:06.490564 | orchestrator | 2025-05-03 00:41:06.490612 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:41:06.491556 | orchestrator | Saturday 03 May 2025 00:41:06 +0000 (0:00:00.214) 0:00:35.045 ********** 2025-05-03 00:41:06.689150 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:41:06.689314 | orchestrator | 2025-05-03 00:41:06.690236 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:41:06.691090 | orchestrator | Saturday 03 May 2025 00:41:06 +0000 (0:00:00.198) 0:00:35.244 ********** 2025-05-03 00:41:07.345367 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-03 00:41:07.346283 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-03 00:41:07.348970 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-03 00:41:07.349531 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-03 00:41:07.349561 | orchestrator | 2025-05-03 00:41:07.350303 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:41:07.350798 | orchestrator | Saturday 03 May 2025 00:41:07 +0000 (0:00:00.655) 0:00:35.899 ********** 2025-05-03 00:41:07.559891 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:41:07.561374 | orchestrator | 2025-05-03 00:41:07.561977 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:41:07.563116 | orchestrator 
| Saturday 03 May 2025 00:41:07 +0000 (0:00:00.214) 0:00:36.114 **********
2025-05-03 00:41:07.762766 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:41:07.764703 | orchestrator |
2025-05-03 00:41:07.765702 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:41:07.766347 | orchestrator | Saturday 03 May 2025 00:41:07 +0000 (0:00:00.203) 0:00:36.317 **********
2025-05-03 00:41:07.959819 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:41:07.960504 | orchestrator |
2025-05-03 00:41:07.962332 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:41:07.963421 | orchestrator | Saturday 03 May 2025 00:41:07 +0000 (0:00:00.196) 0:00:36.514 **********
2025-05-03 00:41:08.175156 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:41:08.175355 | orchestrator |
2025-05-03 00:41:08.175833 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-05-03 00:41:08.176364 | orchestrator | Saturday 03 May 2025 00:41:08 +0000 (0:00:00.215) 0:00:36.730 **********
2025-05-03 00:41:08.364055 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-05-03 00:41:08.364815 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-05-03 00:41:08.365379 | orchestrator |
2025-05-03 00:41:08.366171 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-05-03 00:41:08.366752 | orchestrator | Saturday 03 May 2025 00:41:08 +0000 (0:00:00.187) 0:00:36.918 **********
2025-05-03 00:41:08.502511 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:41:08.503107 | orchestrator |
2025-05-03 00:41:08.503183 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-05-03 00:41:08.503726 | orchestrator | Saturday 03 May 2025 00:41:08 +0000 (0:00:00.139) 0:00:37.058 **********
2025-05-03 00:41:08.843223 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:41:08.844488 | orchestrator |
2025-05-03 00:41:08.844547 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-05-03 00:41:08.845632 | orchestrator | Saturday 03 May 2025 00:41:08 +0000 (0:00:00.340) 0:00:37.398 **********
2025-05-03 00:41:08.999456 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:41:09.000193 | orchestrator |
2025-05-03 00:41:09.000247 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-05-03 00:41:09.001111 | orchestrator | Saturday 03 May 2025 00:41:08 +0000 (0:00:00.156) 0:00:37.554 **********
2025-05-03 00:41:09.153261 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:41:09.153670 | orchestrator |
2025-05-03 00:41:09.154671 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-05-03 00:41:09.154993 | orchestrator | Saturday 03 May 2025 00:41:09 +0000 (0:00:00.153) 0:00:37.708 **********
2025-05-03 00:41:09.357214 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '63c4e6bd-963b-5ec8-a8d0-e52c79716553'}})
2025-05-03 00:41:09.357391 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f0db6d06-6fa6-557d-977f-52f0cf84ead8'}})
2025-05-03 00:41:09.357408 | orchestrator |
2025-05-03 00:41:09.358213 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-05-03 00:41:09.358515 | orchestrator | Saturday 03 May 2025 00:41:09 +0000 (0:00:00.201) 0:00:37.910 **********
2025-05-03 00:41:09.528312 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '63c4e6bd-963b-5ec8-a8d0-e52c79716553'}})
2025-05-03 00:41:09.531006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f0db6d06-6fa6-557d-977f-52f0cf84ead8'}})
2025-05-03 00:41:09.533152 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:41:09.533556 | orchestrator |
2025-05-03 00:41:09.533991 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-05-03 00:41:09.534797 | orchestrator | Saturday 03 May 2025 00:41:09 +0000 (0:00:00.173) 0:00:38.083 **********
2025-05-03 00:41:09.694205 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '63c4e6bd-963b-5ec8-a8d0-e52c79716553'}})
2025-05-03 00:41:09.695007 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f0db6d06-6fa6-557d-977f-52f0cf84ead8'}})
2025-05-03 00:41:09.698928 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:41:09.870614 | orchestrator |
2025-05-03 00:41:09.870721 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-05-03 00:41:09.870741 | orchestrator | Saturday 03 May 2025 00:41:09 +0000 (0:00:00.165) 0:00:38.249 **********
2025-05-03 00:41:09.870771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '63c4e6bd-963b-5ec8-a8d0-e52c79716553'}})
2025-05-03 00:41:09.870920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f0db6d06-6fa6-557d-977f-52f0cf84ead8'}})
2025-05-03 00:41:09.871697 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:41:09.872250 | orchestrator |
2025-05-03 00:41:09.872720 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-03 00:41:09.873209 | orchestrator | Saturday 03 May 2025 00:41:09 +0000 (0:00:00.176) 0:00:38.426 **********
2025-05-03 00:41:10.026744 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:41:10.027247 | orchestrator |
2025-05-03 00:41:10.027883 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-03 00:41:10.028467 | orchestrator | Saturday 03 May 2025 00:41:10 +0000 (0:00:00.156) 0:00:38.582 **********
2025-05-03 00:41:10.181329 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:41:10.181524 | orchestrator |
2025-05-03 00:41:10.182246 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-03 00:41:10.182713 | orchestrator | Saturday 03 May 2025 00:41:10 +0000 (0:00:00.152) 0:00:38.734 **********
2025-05-03 00:41:10.320416 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:41:10.320592 | orchestrator |
2025-05-03 00:41:10.321596 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-03 00:41:10.322276 | orchestrator | Saturday 03 May 2025 00:41:10 +0000 (0:00:00.140) 0:00:38.875 **********
2025-05-03 00:41:10.460476 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:41:10.461021 | orchestrator |
2025-05-03 00:41:10.461581 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-03 00:41:10.462588 | orchestrator | Saturday 03 May 2025 00:41:10 +0000 (0:00:00.141) 0:00:39.016 **********
2025-05-03 00:41:10.599551 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:41:10.600281 | orchestrator |
2025-05-03 00:41:10.600387 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-03 00:41:10.601378 | orchestrator | Saturday 03 May 2025 00:41:10 +0000 (0:00:00.139) 0:00:39.155 **********
2025-05-03 00:41:10.967202 | orchestrator | ok: [testbed-node-5] => {
2025-05-03 00:41:10.968121 | orchestrator |     "ceph_osd_devices": {
2025-05-03 00:41:10.968468 | orchestrator |         "sdb": {
2025-05-03 00:41:10.968501 | orchestrator |             "osd_lvm_uuid": "63c4e6bd-963b-5ec8-a8d0-e52c79716553"
2025-05-03 00:41:10.969549 | orchestrator |         },
2025-05-03 00:41:10.969805 | orchestrator |         "sdc": {
2025-05-03 00:41:10.970467 | orchestrator |             "osd_lvm_uuid": "f0db6d06-6fa6-557d-977f-52f0cf84ead8"
2025-05-03 00:41:10.971450 | orchestrator |         }
2025-05-03 00:41:10.971968 | orchestrator |     }
2025-05-03 00:41:10.972585 | orchestrator | }
2025-05-03 00:41:10.972612 | orchestrator |
2025-05-03 00:41:10.972756 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-03 00:41:10.973324 | orchestrator | Saturday 03 May 2025 00:41:10 +0000 (0:00:00.360) 0:00:39.516 **********
2025-05-03 00:41:11.102904 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:41:11.103730 | orchestrator |
2025-05-03 00:41:11.105100 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-03 00:41:11.106276 | orchestrator | Saturday 03 May 2025 00:41:11 +0000 (0:00:00.141) 0:00:39.657 **********
2025-05-03 00:41:11.242292 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:41:11.242493 | orchestrator |
2025-05-03 00:41:11.243919 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-03 00:41:11.245238 | orchestrator | Saturday 03 May 2025 00:41:11 +0000 (0:00:00.139) 0:00:39.796 **********
2025-05-03 00:41:11.375931 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:41:11.377181 | orchestrator |
2025-05-03 00:41:11.378302 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-03 00:41:11.379922 | orchestrator | Saturday 03 May 2025 00:41:11 +0000 (0:00:00.132) 0:00:39.928 **********
2025-05-03 00:41:11.646251 | orchestrator | changed: [testbed-node-5] => {
2025-05-03 00:41:11.646911 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-05-03 00:41:11.648709 | orchestrator |         "ceph_osd_devices": {
2025-05-03 00:41:11.649989 | orchestrator |             "sdb": {
2025-05-03 00:41:11.651346 | orchestrator |                 "osd_lvm_uuid": "63c4e6bd-963b-5ec8-a8d0-e52c79716553"
2025-05-03 00:41:11.652616 | orchestrator |             },
2025-05-03 00:41:11.654053 | orchestrator |             "sdc": {
2025-05-03 00:41:11.655769 | orchestrator |                 "osd_lvm_uuid": "f0db6d06-6fa6-557d-977f-52f0cf84ead8"
2025-05-03 00:41:11.656862 | orchestrator |             }
2025-05-03 00:41:11.657171 | orchestrator |         },
2025-05-03 00:41:11.657604 | orchestrator |         "lvm_volumes": [
2025-05-03 00:41:11.658193 | orchestrator |             {
2025-05-03 00:41:11.659046 | orchestrator |                 "data": "osd-block-63c4e6bd-963b-5ec8-a8d0-e52c79716553",
2025-05-03 00:41:11.660100 | orchestrator |                 "data_vg": "ceph-63c4e6bd-963b-5ec8-a8d0-e52c79716553"
2025-05-03 00:41:11.661558 | orchestrator |             },
2025-05-03 00:41:11.661780 | orchestrator |             {
2025-05-03 00:41:11.663169 | orchestrator |                 "data": "osd-block-f0db6d06-6fa6-557d-977f-52f0cf84ead8",
2025-05-03 00:41:11.663312 | orchestrator |                 "data_vg": "ceph-f0db6d06-6fa6-557d-977f-52f0cf84ead8"
2025-05-03 00:41:11.664112 | orchestrator |             }
2025-05-03 00:41:11.665018 | orchestrator |         ]
2025-05-03 00:41:11.665522 | orchestrator |     }
2025-05-03 00:41:11.666134 | orchestrator | }
2025-05-03 00:41:11.666753 | orchestrator |
2025-05-03 00:41:11.667602 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-03 00:41:11.667963 | orchestrator | Saturday 03 May 2025 00:41:11 +0000 (0:00:00.271) 0:00:40.200 **********
2025-05-03 00:41:12.784613 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-03 00:41:12.785374 | orchestrator |
2025-05-03 00:41:12.787217 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 00:41:12.787347 | orchestrator | 2025-05-03 00:41:12 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-03 00:41:12.787664 | orchestrator | 2025-05-03 00:41:12 | INFO  | Please wait and do not abort execution.
2025-05-03 00:41:12.787712 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-03 00:41:12.789249 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-03 00:41:12.789610 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-03 00:41:12.790420 | orchestrator |
2025-05-03 00:41:12.791303 | orchestrator |
2025-05-03 00:41:12.792337 | orchestrator |
2025-05-03 00:41:12.792895 | orchestrator | TASKS RECAP ********************************************************************
2025-05-03 00:41:12.793913 | orchestrator | Saturday 03 May 2025 00:41:12 +0000 (0:00:01.137) 0:00:41.338 **********
2025-05-03 00:41:12.794308 | orchestrator | ===============================================================================
2025-05-03 00:41:12.794745 | orchestrator | Write configuration file ------------------------------------------------ 4.78s
2025-05-03 00:41:12.795909 | orchestrator | Add known partitions to the list of available block devices ------------- 1.37s
2025-05-03 00:41:12.796615 | orchestrator | Add known links to the list of available block devices ------------------ 1.32s
2025-05-03 00:41:12.797470 | orchestrator | Print configuration data ------------------------------------------------ 0.98s
2025-05-03 00:41:12.797907 | orchestrator | Add known links to the list of available block devices ------------------ 0.92s
2025-05-03 00:41:12.798746 | orchestrator | Add known partitions to the list of available block devices ------------- 0.86s
2025-05-03 00:41:12.800604 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.78s
2025-05-03 00:41:12.801126 | orchestrator | Generate DB VG names ---------------------------------------------------- 0.76s
2025-05-03 00:41:12.801698 | orchestrator | Print ceph_osd_devices -------------------------------------------------- 0.74s
2025-05-03 00:41:12.802363 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.73s
2025-05-03 00:41:12.802792 | orchestrator | Get initial list of available block devices ----------------------------- 0.72s
2025-05-03 00:41:12.803393 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s
2025-05-03 00:41:12.803914 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s
2025-05-03 00:41:12.804398 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2025-05-03 00:41:12.804757 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s
2025-05-03 00:41:12.805375 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s
2025-05-03 00:41:12.805713 | orchestrator | Add known links to the list of available block devices ------------------ 0.57s
2025-05-03 00:41:12.806203 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.54s
2025-05-03 00:41:12.806650 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.54s
2025-05-03 00:41:12.807434 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.54s
2025-05-03 00:41:24.917295 | orchestrator | 2025-05-03 00:41:24 | INFO  | Task 112bc129-bd17-4445-af0f-4dbd094e3033 is running in background. Output coming soon.
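Aside: the "Print configuration data" output above shows the naming scheme the play derives for each OSD device — one LVM volume group (`ceph-<osd_lvm_uuid>`) and one logical volume (`osd-block-<osd_lvm_uuid>`) per entry in `ceph_osd_devices`. A minimal Python sketch of that mapping, using the UUIDs from this run and covering only the block-only layout (the DB/WAL variants were skipped here):

```python
# Reproduce the lvm_volumes list printed by the "Print configuration data"
# task from the ceph_osd_devices dict. The naming scheme (osd-block-<uuid>
# for the LV, ceph-<uuid> for the VG) is taken directly from the log output.

ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "63c4e6bd-963b-5ec8-a8d0-e52c79716553"},
    "sdc": {"osd_lvm_uuid": "f0db6d06-6fa6-557d-977f-52f0cf84ead8"},
}

lvm_volumes = [
    {
        "data": f"osd-block-{dev['osd_lvm_uuid']}",  # logical volume name
        "data_vg": f"ceph-{dev['osd_lvm_uuid']}",    # volume group name
    }
    for dev in ceph_osd_devices.values()
]

print(lvm_volumes)
```

This matches the `lvm_volumes` list written to the configuration file by the handler above.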
2025-05-03 00:41:47.780689 | orchestrator | 2025-05-03 00:41:39 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-05-03 00:41:49.383413 | orchestrator | 2025-05-03 00:41:39 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-05-03 00:41:49.383532 | orchestrator | 2025-05-03 00:41:39 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-05-03 00:41:49.383554 | orchestrator | 2025-05-03 00:41:39 | INFO  | Handling group overwrites in 99-overwrite
2025-05-03 00:41:49.383584 | orchestrator | 2025-05-03 00:41:39 | INFO  | Removing group frr:children from 60-generic
2025-05-03 00:41:49.383600 | orchestrator | 2025-05-03 00:41:39 | INFO  | Removing group storage:children from 50-kolla
2025-05-03 00:41:49.383629 | orchestrator | 2025-05-03 00:41:39 | INFO  | Removing group netbird:children from 50-infrastruture
2025-05-03 00:41:49.383646 | orchestrator | 2025-05-03 00:41:39 | INFO  | Removing group ceph-mds from 50-ceph
2025-05-03 00:41:49.383662 | orchestrator | 2025-05-03 00:41:39 | INFO  | Removing group ceph-rgw from 50-ceph
2025-05-03 00:41:49.383677 | orchestrator | 2025-05-03 00:41:39 | INFO  | Handling group overwrites in 20-roles
2025-05-03 00:41:49.383693 | orchestrator | 2025-05-03 00:41:39 | INFO  | Removing group k3s_node from 50-infrastruture
2025-05-03 00:41:49.383709 | orchestrator | 2025-05-03 00:41:40 | INFO  | File 20-netbox not found in /inventory.pre/
2025-05-03 00:41:49.383724 | orchestrator | 2025-05-03 00:41:47 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups
2025-05-03 00:41:49.383756 | orchestrator | 2025-05-03 00:41:49 | INFO  | Task 9128aa27-06a6-434e-8512-24b675125339 (ceph-create-lvm-devices) was prepared for execution.
2025-05-03 00:41:52.332066 | orchestrator | 2025-05-03 00:41:49 | INFO  | It takes a moment until task 9128aa27-06a6-434e-8512-24b675125339 (ceph-create-lvm-devices) has been started and output is visible here.
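Aside: the ceph-create-lvm-devices task prepared above creates one volume group and one logical volume per `lvm_volumes` entry (the "Create block VGs" / "Create block LVs" tasks in the play that follows). The playbook does this through Ansible LVM modules; as a rough sketch, the shell equivalents for a single entry can be rendered as below. The physical volume path `/dev/sdb` is an assumed example, not taken from the log.

```python
# Rough shell equivalents of the "Create block VGs" / "Create block LVs" steps
# for one lvm_volumes entry. Illustrative only: the playbook itself uses
# Ansible LVM modules, and the physical volume path is a hypothetical example.

def lvm_commands(volume: dict, pv: str) -> list:
    """Render vgcreate/lvcreate equivalents for a single lvm_volumes entry."""
    return [
        f"vgcreate {volume['data_vg']} {pv}",                             # block VG
        f"lvcreate -l 100%FREE -n {volume['data']} {volume['data_vg']}",  # block LV
    ]

volume = {
    "data": "osd-block-eca5292b-8794-515a-ad73-b5efc7970d6a",
    "data_vg": "ceph-eca5292b-8794-515a-ad73-b5efc7970d6a",
}
for cmd in lvm_commands(volume, "/dev/sdb"):
    print(cmd)
```

Each LV consumes the whole VG (`-l 100%FREE`), matching the one-VG-per-OSD-device layout visible in the output below.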
2025-05-03 00:41:52.332275 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-05-03 00:41:52.832905 | orchestrator |
2025-05-03 00:41:52.834101 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-05-03 00:41:52.835290 | orchestrator |
2025-05-03 00:41:52.835617 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-03 00:41:52.837060 | orchestrator | Saturday 03 May 2025 00:41:52 +0000 (0:00:00.435) 0:00:00.435 **********
2025-05-03 00:41:53.080465 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-03 00:41:53.081139 | orchestrator |
2025-05-03 00:41:53.082906 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-03 00:41:53.085608 | orchestrator | Saturday 03 May 2025 00:41:53 +0000 (0:00:00.247) 0:00:00.682 **********
2025-05-03 00:41:53.323267 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:41:53.325620 | orchestrator |
2025-05-03 00:41:53.328460 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:41:53.328525 | orchestrator | Saturday 03 May 2025 00:41:53 +0000 (0:00:00.244) 0:00:00.926 **********
2025-05-03 00:41:54.101198 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-05-03 00:41:54.103758 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-05-03 00:41:54.103902 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-05-03 00:41:54.103931 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-05-03 00:41:54.103951 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-05-03 00:41:54.108810 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-05-03 00:41:54.112900 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-05-03 00:41:54.112972 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-05-03 00:41:54.112999 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-05-03 00:41:54.113058 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-05-03 00:41:54.113083 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-05-03 00:41:54.113121 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-05-03 00:41:54.114184 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-05-03 00:41:54.116923 | orchestrator |
2025-05-03 00:41:54.117026 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:41:54.117312 | orchestrator | Saturday 03 May 2025 00:41:54 +0000 (0:00:00.776) 0:00:01.703 **********
2025-05-03 00:41:54.314402 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:41:54.314888 | orchestrator |
2025-05-03 00:41:54.316557 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:41:54.318196 | orchestrator | Saturday 03 May 2025 00:41:54 +0000 (0:00:00.214) 0:00:01.918 **********
2025-05-03 00:41:54.544906 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:41:54.546225 | orchestrator |
2025-05-03 00:41:54.546349 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:41:54.546598 | orchestrator | Saturday 03 May 2025 00:41:54 +0000 (0:00:00.224) 0:00:02.142 **********
2025-05-03 00:41:54.791591 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:41:54.797966 | orchestrator |
2025-05-03 00:41:54.804656 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:41:54.806608 | orchestrator | Saturday 03 May 2025 00:41:54 +0000 (0:00:00.249) 0:00:02.391 **********
2025-05-03 00:41:54.999797 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:41:55.002134 | orchestrator |
2025-05-03 00:41:55.002725 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:41:55.003226 | orchestrator | Saturday 03 May 2025 00:41:54 +0000 (0:00:00.211) 0:00:02.603 **********
2025-05-03 00:41:55.207945 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:41:55.208356 | orchestrator |
2025-05-03 00:41:55.208402 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:41:55.209502 | orchestrator | Saturday 03 May 2025 00:41:55 +0000 (0:00:00.203) 0:00:02.806 **********
2025-05-03 00:41:55.400812 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:41:55.401512 | orchestrator |
2025-05-03 00:41:55.402506 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:41:55.403462 | orchestrator | Saturday 03 May 2025 00:41:55 +0000 (0:00:00.197) 0:00:03.003 **********
2025-05-03 00:41:55.613967 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:41:55.614289 | orchestrator |
2025-05-03 00:41:55.614328 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:41:55.617486 | orchestrator | Saturday 03 May 2025 00:41:55 +0000 (0:00:00.212) 0:00:03.216 **********
2025-05-03 00:41:55.829587 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:41:55.831713 | orchestrator |
2025-05-03 00:41:55.833481 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:41:55.833520 | orchestrator | Saturday 03 May 2025 00:41:55 +0000 (0:00:00.215) 0:00:03.432 **********
2025-05-03 00:41:56.447349 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9d13b5c6-5c37-4969-9bdf-e1b816fbff4c)
2025-05-03 00:41:56.447597 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9d13b5c6-5c37-4969-9bdf-e1b816fbff4c)
2025-05-03 00:41:56.448727 | orchestrator |
2025-05-03 00:41:56.449306 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:41:56.449931 | orchestrator | Saturday 03 May 2025 00:41:56 +0000 (0:00:00.617) 0:00:04.049 **********
2025-05-03 00:41:57.263652 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_60710ea4-1ba5-44da-b34f-cb4cc5f20e97)
2025-05-03 00:41:57.265139 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_60710ea4-1ba5-44da-b34f-cb4cc5f20e97)
2025-05-03 00:41:57.265639 | orchestrator |
2025-05-03 00:41:57.266558 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:41:57.267435 | orchestrator | Saturday 03 May 2025 00:41:57 +0000 (0:00:00.816) 0:00:04.865 **********
2025-05-03 00:41:57.735368 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_494ae4e2-fb03-468f-bae0-ffa1e1c51b21)
2025-05-03 00:41:57.737182 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_494ae4e2-fb03-468f-bae0-ffa1e1c51b21)
2025-05-03 00:41:57.737443 | orchestrator |
2025-05-03 00:41:57.738364 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:41:57.740175 | orchestrator | Saturday 03 May 2025 00:41:57 +0000 (0:00:00.471) 0:00:05.337 **********
2025-05-03 00:41:58.177098 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fbc89abf-41e4-403a-af47-fe4d6db2bcc8)
2025-05-03 00:41:58.178069 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fbc89abf-41e4-403a-af47-fe4d6db2bcc8)
2025-05-03 00:41:58.178900 | orchestrator |
2025-05-03 00:41:58.179058 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:41:58.179459 | orchestrator | Saturday 03 May 2025 00:41:58 +0000 (0:00:00.438) 0:00:05.776 **********
2025-05-03 00:41:58.490960 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-03 00:41:58.492726 | orchestrator |
2025-05-03 00:41:58.493018 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:41:58.493072 | orchestrator | Saturday 03 May 2025 00:41:58 +0000 (0:00:00.316) 0:00:06.093 **********
2025-05-03 00:41:58.966306 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-05-03 00:41:58.969960 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-05-03 00:41:58.971291 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-05-03 00:41:58.971324 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-05-03 00:41:58.975577 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-05-03 00:41:58.976781 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-05-03 00:41:58.976810 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-05-03 00:41:58.976830 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-05-03 00:41:58.977461 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-05-03 00:41:58.979521 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-05-03 00:41:58.980002 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-05-03 00:41:58.980387 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-05-03 00:41:58.981196 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-05-03 00:41:58.981365 | orchestrator |
2025-05-03 00:41:58.981789 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:41:58.983683 | orchestrator | Saturday 03 May 2025 00:41:58 +0000 (0:00:00.475) 0:00:06.568 **********
2025-05-03 00:41:59.167608 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:41:59.171746 | orchestrator |
2025-05-03 00:41:59.171822 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:41:59.363929 | orchestrator | Saturday 03 May 2025 00:41:59 +0000 (0:00:00.200) 0:00:06.769 **********
2025-05-03 00:41:59.364062 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:41:59.364212 | orchestrator |
2025-05-03 00:41:59.364605 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:41:59.364645 | orchestrator | Saturday 03 May 2025 00:41:59 +0000 (0:00:00.197) 0:00:06.966 **********
2025-05-03 00:41:59.575050 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:41:59.575290 | orchestrator |
2025-05-03 00:41:59.576443 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:41:59.578575 | orchestrator | Saturday 03 May 2025 00:41:59 +0000 (0:00:00.210) 0:00:07.177 **********
2025-05-03 00:41:59.772147 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:41:59.772517 | orchestrator |
2025-05-03 00:41:59.773774 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:41:59.774516 | orchestrator | Saturday 03 May 2025 00:41:59 +0000 (0:00:00.196) 0:00:07.373 **********
2025-05-03 00:42:00.323630 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:42:00.323751 | orchestrator |
2025-05-03 00:42:00.324674 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:42:00.325752 | orchestrator | Saturday 03 May 2025 00:42:00 +0000 (0:00:00.551) 0:00:07.924 **********
2025-05-03 00:42:00.519338 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:42:00.519481 | orchestrator |
2025-05-03 00:42:00.520401 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:42:00.521406 | orchestrator | Saturday 03 May 2025 00:42:00 +0000 (0:00:00.198) 0:00:08.123 **********
2025-05-03 00:42:00.710256 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:42:00.710405 | orchestrator |
2025-05-03 00:42:00.710939 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:42:00.711712 | orchestrator | Saturday 03 May 2025 00:42:00 +0000 (0:00:00.190) 0:00:08.313 **********
2025-05-03 00:42:00.903132 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:42:00.903635 | orchestrator |
2025-05-03 00:42:00.904893 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:42:00.906216 | orchestrator | Saturday 03 May 2025 00:42:00 +0000 (0:00:00.191) 0:00:08.505 **********
2025-05-03 00:42:01.524488 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-05-03 00:42:01.525052 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-05-03 00:42:01.525962 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-05-03 00:42:01.526874 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-05-03 00:42:01.527736 | orchestrator |
2025-05-03 00:42:01.528435 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:42:01.529359 | orchestrator | Saturday 03 May 2025 00:42:01 +0000 (0:00:00.620) 0:00:09.126 **********
2025-05-03 00:42:01.716251 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:42:01.717373 | orchestrator |
2025-05-03 00:42:01.718381 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:42:01.719073 | orchestrator | Saturday 03 May 2025 00:42:01 +0000 (0:00:00.193) 0:00:09.320 **********
2025-05-03 00:42:01.912607 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:42:01.913158 | orchestrator |
2025-05-03 00:42:01.917202 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:42:01.918331 | orchestrator | Saturday 03 May 2025 00:42:01 +0000 (0:00:00.195) 0:00:09.515 **********
2025-05-03 00:42:02.109359 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:42:02.109531 | orchestrator |
2025-05-03 00:42:02.109561 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:42:02.110124 | orchestrator | Saturday 03 May 2025 00:42:02 +0000 (0:00:00.197) 0:00:09.713 **********
2025-05-03 00:42:02.312423 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:42:02.313046 | orchestrator |
2025-05-03 00:42:02.315531 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-05-03 00:42:02.445233 | orchestrator | Saturday 03 May 2025 00:42:02 +0000 (0:00:00.201) 0:00:09.914 **********
2025-05-03 00:42:02.445338 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:42:02.446111 | orchestrator |
2025-05-03 00:42:02.446962 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-05-03 00:42:02.447964 | orchestrator | Saturday 03 May 2025 00:42:02 +0000 (0:00:00.134) 0:00:10.049 **********
2025-05-03 00:42:02.645305 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'eca5292b-8794-515a-ad73-b5efc7970d6a'}})
2025-05-03 00:42:02.645666 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a7a18630-ef35-59a0-a2f0-363b4ab3cd76'}})
2025-05-03 00:42:02.646101 | orchestrator |
2025-05-03 00:42:02.646396 | orchestrator | TASK [Create block VGs] ********************************************************
2025-05-03 00:42:02.646870 | orchestrator | Saturday 03 May 2025 00:42:02 +0000 (0:00:00.199) 0:00:10.248 **********
2025-05-03 00:42:04.884605 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-eca5292b-8794-515a-ad73-b5efc7970d6a', 'data_vg': 'ceph-eca5292b-8794-515a-ad73-b5efc7970d6a'})
2025-05-03 00:42:04.886367 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a7a18630-ef35-59a0-a2f0-363b4ab3cd76', 'data_vg': 'ceph-a7a18630-ef35-59a0-a2f0-363b4ab3cd76'})
2025-05-03 00:42:04.886994 | orchestrator |
2025-05-03 00:42:04.887086 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-05-03 00:42:04.887113 | orchestrator | Saturday 03 May 2025 00:42:04 +0000 (0:00:02.236) 0:00:12.484 **********
2025-05-03 00:42:05.056324 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eca5292b-8794-515a-ad73-b5efc7970d6a', 'data_vg': 'ceph-eca5292b-8794-515a-ad73-b5efc7970d6a'})
2025-05-03 00:42:05.057104 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7a18630-ef35-59a0-a2f0-363b4ab3cd76', 'data_vg': 'ceph-a7a18630-ef35-59a0-a2f0-363b4ab3cd76'})
2025-05-03 00:42:05.057159 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:42:05.057949 | orchestrator |
2025-05-03 00:42:05.058782 | orchestrator | TASK [Create block LVs] ********************************************************
2025-05-03 00:42:05.059740 | orchestrator | Saturday 03 May 2025 00:42:05 +0000 (0:00:00.174) 0:00:12.659 **********
2025-05-03 00:42:06.555720 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-eca5292b-8794-515a-ad73-b5efc7970d6a', 'data_vg': 'ceph-eca5292b-8794-515a-ad73-b5efc7970d6a'})
2025-05-03 00:42:06.556182 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a7a18630-ef35-59a0-a2f0-363b4ab3cd76', 'data_vg': 'ceph-a7a18630-ef35-59a0-a2f0-363b4ab3cd76'})
2025-05-03 00:42:06.559027 | orchestrator |
2025-05-03 00:42:06.559991 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-05-03 00:42:06.560030 | orchestrator | Saturday 03 May 2025 00:42:06 +0000 (0:00:01.497) 0:00:14.156 **********
2025-05-03 00:42:06.721389 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eca5292b-8794-515a-ad73-b5efc7970d6a', 'data_vg': 'ceph-eca5292b-8794-515a-ad73-b5efc7970d6a'})
2025-05-03 00:42:06.722194 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7a18630-ef35-59a0-a2f0-363b4ab3cd76', 'data_vg': 'ceph-a7a18630-ef35-59a0-a2f0-363b4ab3cd76'})
2025-05-03 00:42:06.723148 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:42:06.724126 | orchestrator |
2025-05-03 00:42:06.726568 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-05-03 00:42:06.882962 | orchestrator | Saturday 03 May 2025 00:42:06 +0000 (0:00:00.167) 0:00:14.324 **********
2025-05-03 00:42:06.883118 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:42:06.883188 | orchestrator |
2025-05-03 00:42:06.883927 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-05-03 00:42:06.886516 | orchestrator | Saturday 03 May 2025 00:42:06 +0000 (0:00:00.160) 0:00:14.485 **********
2025-05-03 00:42:07.061167 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eca5292b-8794-515a-ad73-b5efc7970d6a', 'data_vg': 'ceph-eca5292b-8794-515a-ad73-b5efc7970d6a'})
2025-05-03 00:42:07.062441 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7a18630-ef35-59a0-a2f0-363b4ab3cd76', 'data_vg': 'ceph-a7a18630-ef35-59a0-a2f0-363b4ab3cd76'})
2025-05-03 00:42:07.063126 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:42:07.063876 | orchestrator |
2025-05-03 00:42:07.066792 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-05-03 00:42:07.204961 | orchestrator | Saturday 03 May 2025 00:42:07 +0000 (0:00:00.177) 0:00:14.662 **********
2025-05-03 00:42:07.205101 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:42:07.205791 | orchestrator |
2025-05-03 00:42:07.206817 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-05-03 00:42:07.207737 | orchestrator | Saturday 03 May 2025 00:42:07 +0000 (0:00:00.145) 0:00:14.808 **********
2025-05-03 00:42:07.401926 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eca5292b-8794-515a-ad73-b5efc7970d6a', 'data_vg': 'ceph-eca5292b-8794-515a-ad73-b5efc7970d6a'})
2025-05-03 00:42:07.402531 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7a18630-ef35-59a0-a2f0-363b4ab3cd76', 'data_vg': 'ceph-a7a18630-ef35-59a0-a2f0-363b4ab3cd76'})
2025-05-03 00:42:07.403372 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:42:07.404594 | orchestrator |
2025-05-03 00:42:07.407284 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-05-03 00:42:07.744443 | orchestrator | Saturday 03 May 2025 00:42:07 +0000 (0:00:00.196) 0:00:15.004 **********
2025-05-03 00:42:07.744576 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:42:07.746414 | orchestrator |
2025-05-03 00:42:07.746797 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-05-03 00:42:07.747629 | orchestrator | Saturday 03 May 2025 00:42:07 +0000 (0:00:00.343) 0:00:15.347 **********
2025-05-03 00:42:07.905078 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eca5292b-8794-515a-ad73-b5efc7970d6a', 'data_vg': 'ceph-eca5292b-8794-515a-ad73-b5efc7970d6a'})
2025-05-03 00:42:07.905872 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7a18630-ef35-59a0-a2f0-363b4ab3cd76', 'data_vg': 'ceph-a7a18630-ef35-59a0-a2f0-363b4ab3cd76'})
2025-05-03 00:42:07.906885 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:42:07.907523 | orchestrator |
2025-05-03 00:42:07.908258 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-05-03 00:42:07.909016 | orchestrator | Saturday 03 May 2025 00:42:07 +0000 (0:00:00.160) 0:00:15.508 **********
2025-05-03 00:42:08.047962 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:42:08.048214 | orchestrator |
2025-05-03 00:42:08.048903 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-05-03 00:42:08.049351 | orchestrator | Saturday 03 May 2025 00:42:08 +0000 (0:00:00.141) 0:00:15.650 **********
2025-05-03 00:42:08.218965 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eca5292b-8794-515a-ad73-b5efc7970d6a', 'data_vg': 'ceph-eca5292b-8794-515a-ad73-b5efc7970d6a'})
2025-05-03 00:42:08.219725 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7a18630-ef35-59a0-a2f0-363b4ab3cd76', 'data_vg': 'ceph-a7a18630-ef35-59a0-a2f0-363b4ab3cd76'})
2025-05-03 00:42:08.220407 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:42:08.221142 | orchestrator |
2025-05-03 00:42:08.221677 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-05-03 00:42:08.222486 | orchestrator | Saturday 03 May 2025 00:42:08 +0000 (0:00:00.172) 0:00:15.822 **********
2025-05-03 00:42:08.392478 | orchestrator | skipping: [testbed-node-3] =>
(item={'data': 'osd-block-eca5292b-8794-515a-ad73-b5efc7970d6a', 'data_vg': 'ceph-eca5292b-8794-515a-ad73-b5efc7970d6a'})  2025-05-03 00:42:08.392772 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7a18630-ef35-59a0-a2f0-363b4ab3cd76', 'data_vg': 'ceph-a7a18630-ef35-59a0-a2f0-363b4ab3cd76'})  2025-05-03 00:42:08.394182 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:08.394566 | orchestrator | 2025-05-03 00:42:08.395420 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-03 00:42:08.396389 | orchestrator | Saturday 03 May 2025 00:42:08 +0000 (0:00:00.172) 0:00:15.995 ********** 2025-05-03 00:42:08.560401 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eca5292b-8794-515a-ad73-b5efc7970d6a', 'data_vg': 'ceph-eca5292b-8794-515a-ad73-b5efc7970d6a'})  2025-05-03 00:42:08.560604 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7a18630-ef35-59a0-a2f0-363b4ab3cd76', 'data_vg': 'ceph-a7a18630-ef35-59a0-a2f0-363b4ab3cd76'})  2025-05-03 00:42:08.561989 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:08.564439 | orchestrator | 2025-05-03 00:42:08.691177 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-03 00:42:08.691292 | orchestrator | Saturday 03 May 2025 00:42:08 +0000 (0:00:00.167) 0:00:16.163 ********** 2025-05-03 00:42:08.691326 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:08.691428 | orchestrator | 2025-05-03 00:42:08.692114 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-03 00:42:08.692935 | orchestrator | Saturday 03 May 2025 00:42:08 +0000 (0:00:00.130) 0:00:16.293 ********** 2025-05-03 00:42:08.833369 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:08.833570 | orchestrator | 2025-05-03 00:42:08.835269 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2025-05-03 00:42:08.835830 | orchestrator | Saturday 03 May 2025 00:42:08 +0000 (0:00:00.137) 0:00:16.431 ********** 2025-05-03 00:42:08.950694 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:08.951315 | orchestrator | 2025-05-03 00:42:08.951350 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-03 00:42:08.951394 | orchestrator | Saturday 03 May 2025 00:42:08 +0000 (0:00:00.123) 0:00:16.554 ********** 2025-05-03 00:42:09.100716 | orchestrator | ok: [testbed-node-3] => { 2025-05-03 00:42:09.102488 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-03 00:42:09.103000 | orchestrator | } 2025-05-03 00:42:09.104401 | orchestrator | 2025-05-03 00:42:09.105027 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-03 00:42:09.106425 | orchestrator | Saturday 03 May 2025 00:42:09 +0000 (0:00:00.149) 0:00:16.703 ********** 2025-05-03 00:42:09.242257 | orchestrator | ok: [testbed-node-3] => { 2025-05-03 00:42:09.242716 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-03 00:42:09.243519 | orchestrator | } 2025-05-03 00:42:09.243982 | orchestrator | 2025-05-03 00:42:09.244451 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-03 00:42:09.245137 | orchestrator | Saturday 03 May 2025 00:42:09 +0000 (0:00:00.140) 0:00:16.844 ********** 2025-05-03 00:42:09.387769 | orchestrator | ok: [testbed-node-3] => { 2025-05-03 00:42:09.388975 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-03 00:42:09.389502 | orchestrator | } 2025-05-03 00:42:09.390730 | orchestrator | 2025-05-03 00:42:09.391456 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-03 00:42:09.392219 | orchestrator | Saturday 03 May 2025 00:42:09 +0000 (0:00:00.144) 0:00:16.988 ********** 2025-05-03 00:42:10.366930 | orchestrator | ok: 
[testbed-node-3] 2025-05-03 00:42:10.367797 | orchestrator | 2025-05-03 00:42:10.368772 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-03 00:42:10.369740 | orchestrator | Saturday 03 May 2025 00:42:10 +0000 (0:00:00.979) 0:00:17.968 ********** 2025-05-03 00:42:10.884486 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:42:10.884664 | orchestrator | 2025-05-03 00:42:10.886179 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-03 00:42:10.887103 | orchestrator | Saturday 03 May 2025 00:42:10 +0000 (0:00:00.517) 0:00:18.486 ********** 2025-05-03 00:42:11.408503 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:42:11.409221 | orchestrator | 2025-05-03 00:42:11.410306 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-03 00:42:11.411202 | orchestrator | Saturday 03 May 2025 00:42:11 +0000 (0:00:00.523) 0:00:19.010 ********** 2025-05-03 00:42:11.558396 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:42:11.558823 | orchestrator | 2025-05-03 00:42:11.560062 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-03 00:42:11.561234 | orchestrator | Saturday 03 May 2025 00:42:11 +0000 (0:00:00.150) 0:00:19.161 ********** 2025-05-03 00:42:11.670168 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:11.671180 | orchestrator | 2025-05-03 00:42:11.671233 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-03 00:42:11.671899 | orchestrator | Saturday 03 May 2025 00:42:11 +0000 (0:00:00.111) 0:00:19.273 ********** 2025-05-03 00:42:11.796669 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:11.796995 | orchestrator | 2025-05-03 00:42:11.797159 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-03 00:42:11.797780 | orchestrator | 
Saturday 03 May 2025 00:42:11 +0000 (0:00:00.127) 0:00:19.400 ********** 2025-05-03 00:42:11.933817 | orchestrator | ok: [testbed-node-3] => { 2025-05-03 00:42:11.934602 | orchestrator |  "vgs_report": { 2025-05-03 00:42:11.937083 | orchestrator |  "vg": [] 2025-05-03 00:42:11.938393 | orchestrator |  } 2025-05-03 00:42:11.939309 | orchestrator | } 2025-05-03 00:42:11.939350 | orchestrator | 2025-05-03 00:42:11.940472 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-03 00:42:11.941145 | orchestrator | Saturday 03 May 2025 00:42:11 +0000 (0:00:00.135) 0:00:19.535 ********** 2025-05-03 00:42:12.067503 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:12.068992 | orchestrator | 2025-05-03 00:42:12.069039 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-03 00:42:12.070130 | orchestrator | Saturday 03 May 2025 00:42:12 +0000 (0:00:00.133) 0:00:19.669 ********** 2025-05-03 00:42:12.202330 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:12.202793 | orchestrator | 2025-05-03 00:42:12.203545 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-03 00:42:12.204618 | orchestrator | Saturday 03 May 2025 00:42:12 +0000 (0:00:00.135) 0:00:19.805 ********** 2025-05-03 00:42:12.340456 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:12.341422 | orchestrator | 2025-05-03 00:42:12.341814 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-03 00:42:12.343177 | orchestrator | Saturday 03 May 2025 00:42:12 +0000 (0:00:00.137) 0:00:19.943 ********** 2025-05-03 00:42:12.473265 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:12.475522 | orchestrator | 2025-05-03 00:42:12.476062 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-03 00:42:12.476944 | orchestrator | 
Saturday 03 May 2025 00:42:12 +0000 (0:00:00.132) 0:00:20.076 ********** 2025-05-03 00:42:12.781521 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:12.782143 | orchestrator | 2025-05-03 00:42:12.783145 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-03 00:42:12.783780 | orchestrator | Saturday 03 May 2025 00:42:12 +0000 (0:00:00.308) 0:00:20.384 ********** 2025-05-03 00:42:12.925613 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:12.925806 | orchestrator | 2025-05-03 00:42:12.926698 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-03 00:42:12.927538 | orchestrator | Saturday 03 May 2025 00:42:12 +0000 (0:00:00.142) 0:00:20.527 ********** 2025-05-03 00:42:13.061217 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:13.061554 | orchestrator | 2025-05-03 00:42:13.062364 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-03 00:42:13.064462 | orchestrator | Saturday 03 May 2025 00:42:13 +0000 (0:00:00.136) 0:00:20.663 ********** 2025-05-03 00:42:13.202516 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:13.203537 | orchestrator | 2025-05-03 00:42:13.203607 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-03 00:42:13.203698 | orchestrator | Saturday 03 May 2025 00:42:13 +0000 (0:00:00.141) 0:00:20.805 ********** 2025-05-03 00:42:13.327699 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:13.328389 | orchestrator | 2025-05-03 00:42:13.329303 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-03 00:42:13.330162 | orchestrator | Saturday 03 May 2025 00:42:13 +0000 (0:00:00.125) 0:00:20.931 ********** 2025-05-03 00:42:13.455675 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:13.455874 | orchestrator | 2025-05-03 00:42:13.457481 
| orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-03 00:42:13.457786 | orchestrator | Saturday 03 May 2025 00:42:13 +0000 (0:00:00.127) 0:00:21.058 ********** 2025-05-03 00:42:13.595291 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:13.595468 | orchestrator | 2025-05-03 00:42:13.596324 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-03 00:42:13.597174 | orchestrator | Saturday 03 May 2025 00:42:13 +0000 (0:00:00.139) 0:00:21.198 ********** 2025-05-03 00:42:13.736536 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:13.736805 | orchestrator | 2025-05-03 00:42:13.737304 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-03 00:42:13.738429 | orchestrator | Saturday 03 May 2025 00:42:13 +0000 (0:00:00.140) 0:00:21.339 ********** 2025-05-03 00:42:13.889361 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:13.889556 | orchestrator | 2025-05-03 00:42:13.889600 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-03 00:42:13.890353 | orchestrator | Saturday 03 May 2025 00:42:13 +0000 (0:00:00.148) 0:00:21.487 ********** 2025-05-03 00:42:14.032681 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:14.032933 | orchestrator | 2025-05-03 00:42:14.032969 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-03 00:42:14.033574 | orchestrator | Saturday 03 May 2025 00:42:14 +0000 (0:00:00.147) 0:00:21.635 ********** 2025-05-03 00:42:14.197404 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eca5292b-8794-515a-ad73-b5efc7970d6a', 'data_vg': 'ceph-eca5292b-8794-515a-ad73-b5efc7970d6a'})  2025-05-03 00:42:14.200056 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7a18630-ef35-59a0-a2f0-363b4ab3cd76', 'data_vg': 
'ceph-a7a18630-ef35-59a0-a2f0-363b4ab3cd76'})  2025-05-03 00:42:14.202181 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:14.371464 | orchestrator | 2025-05-03 00:42:14.371583 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-03 00:42:14.371602 | orchestrator | Saturday 03 May 2025 00:42:14 +0000 (0:00:00.162) 0:00:21.797 ********** 2025-05-03 00:42:14.371634 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eca5292b-8794-515a-ad73-b5efc7970d6a', 'data_vg': 'ceph-eca5292b-8794-515a-ad73-b5efc7970d6a'})  2025-05-03 00:42:14.373119 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7a18630-ef35-59a0-a2f0-363b4ab3cd76', 'data_vg': 'ceph-a7a18630-ef35-59a0-a2f0-363b4ab3cd76'})  2025-05-03 00:42:14.373171 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:14.373996 | orchestrator | 2025-05-03 00:42:14.374716 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-03 00:42:14.375828 | orchestrator | Saturday 03 May 2025 00:42:14 +0000 (0:00:00.176) 0:00:21.973 ********** 2025-05-03 00:42:14.847223 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eca5292b-8794-515a-ad73-b5efc7970d6a', 'data_vg': 'ceph-eca5292b-8794-515a-ad73-b5efc7970d6a'})  2025-05-03 00:42:14.849161 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7a18630-ef35-59a0-a2f0-363b4ab3cd76', 'data_vg': 'ceph-a7a18630-ef35-59a0-a2f0-363b4ab3cd76'})  2025-05-03 00:42:14.850205 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:14.850793 | orchestrator | 2025-05-03 00:42:14.851340 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-03 00:42:14.851808 | orchestrator | Saturday 03 May 2025 00:42:14 +0000 (0:00:00.473) 0:00:22.447 ********** 2025-05-03 00:42:15.022464 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-eca5292b-8794-515a-ad73-b5efc7970d6a', 'data_vg': 'ceph-eca5292b-8794-515a-ad73-b5efc7970d6a'})  2025-05-03 00:42:15.023081 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7a18630-ef35-59a0-a2f0-363b4ab3cd76', 'data_vg': 'ceph-a7a18630-ef35-59a0-a2f0-363b4ab3cd76'})  2025-05-03 00:42:15.023490 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:15.024053 | orchestrator | 2025-05-03 00:42:15.024784 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-03 00:42:15.025122 | orchestrator | Saturday 03 May 2025 00:42:15 +0000 (0:00:00.178) 0:00:22.625 ********** 2025-05-03 00:42:15.214312 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eca5292b-8794-515a-ad73-b5efc7970d6a', 'data_vg': 'ceph-eca5292b-8794-515a-ad73-b5efc7970d6a'})  2025-05-03 00:42:15.214560 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7a18630-ef35-59a0-a2f0-363b4ab3cd76', 'data_vg': 'ceph-a7a18630-ef35-59a0-a2f0-363b4ab3cd76'})  2025-05-03 00:42:15.216317 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:15.216696 | orchestrator | 2025-05-03 00:42:15.217892 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-03 00:42:15.218052 | orchestrator | Saturday 03 May 2025 00:42:15 +0000 (0:00:00.187) 0:00:22.813 ********** 2025-05-03 00:42:15.376031 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eca5292b-8794-515a-ad73-b5efc7970d6a', 'data_vg': 'ceph-eca5292b-8794-515a-ad73-b5efc7970d6a'})  2025-05-03 00:42:15.376220 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7a18630-ef35-59a0-a2f0-363b4ab3cd76', 'data_vg': 'ceph-a7a18630-ef35-59a0-a2f0-363b4ab3cd76'})  2025-05-03 00:42:15.379005 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:15.380192 | orchestrator | 2025-05-03 00:42:15.380218 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2025-05-03 00:42:15.380238 | orchestrator | Saturday 03 May 2025 00:42:15 +0000 (0:00:00.164) 0:00:22.978 ********** 2025-05-03 00:42:15.551169 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eca5292b-8794-515a-ad73-b5efc7970d6a', 'data_vg': 'ceph-eca5292b-8794-515a-ad73-b5efc7970d6a'})  2025-05-03 00:42:15.551418 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7a18630-ef35-59a0-a2f0-363b4ab3cd76', 'data_vg': 'ceph-a7a18630-ef35-59a0-a2f0-363b4ab3cd76'})  2025-05-03 00:42:15.552737 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:15.553363 | orchestrator | 2025-05-03 00:42:15.553912 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-03 00:42:15.555096 | orchestrator | Saturday 03 May 2025 00:42:15 +0000 (0:00:00.176) 0:00:23.154 ********** 2025-05-03 00:42:15.755624 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eca5292b-8794-515a-ad73-b5efc7970d6a', 'data_vg': 'ceph-eca5292b-8794-515a-ad73-b5efc7970d6a'})  2025-05-03 00:42:15.756021 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7a18630-ef35-59a0-a2f0-363b4ab3cd76', 'data_vg': 'ceph-a7a18630-ef35-59a0-a2f0-363b4ab3cd76'})  2025-05-03 00:42:15.756646 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:15.757340 | orchestrator | 2025-05-03 00:42:15.757752 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-03 00:42:15.758503 | orchestrator | Saturday 03 May 2025 00:42:15 +0000 (0:00:00.204) 0:00:23.358 ********** 2025-05-03 00:42:16.314401 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:42:16.314826 | orchestrator | 2025-05-03 00:42:16.315217 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-03 00:42:16.315801 | orchestrator | Saturday 03 May 2025 00:42:16 +0000 
(0:00:00.552) 0:00:23.911 ********** 2025-05-03 00:42:16.813365 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:42:16.813987 | orchestrator | 2025-05-03 00:42:16.814381 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-03 00:42:16.815102 | orchestrator | Saturday 03 May 2025 00:42:16 +0000 (0:00:00.505) 0:00:24.416 ********** 2025-05-03 00:42:16.964290 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:42:16.964506 | orchestrator | 2025-05-03 00:42:16.967531 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-03 00:42:17.140664 | orchestrator | Saturday 03 May 2025 00:42:16 +0000 (0:00:00.149) 0:00:24.566 ********** 2025-05-03 00:42:17.140787 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-a7a18630-ef35-59a0-a2f0-363b4ab3cd76', 'vg_name': 'ceph-a7a18630-ef35-59a0-a2f0-363b4ab3cd76'}) 2025-05-03 00:42:17.140923 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-eca5292b-8794-515a-ad73-b5efc7970d6a', 'vg_name': 'ceph-eca5292b-8794-515a-ad73-b5efc7970d6a'}) 2025-05-03 00:42:17.140946 | orchestrator | 2025-05-03 00:42:17.141692 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-03 00:42:17.144583 | orchestrator | Saturday 03 May 2025 00:42:17 +0000 (0:00:00.176) 0:00:24.743 ********** 2025-05-03 00:42:17.525098 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eca5292b-8794-515a-ad73-b5efc7970d6a', 'data_vg': 'ceph-eca5292b-8794-515a-ad73-b5efc7970d6a'})  2025-05-03 00:42:17.526612 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7a18630-ef35-59a0-a2f0-363b4ab3cd76', 'data_vg': 'ceph-a7a18630-ef35-59a0-a2f0-363b4ab3cd76'})  2025-05-03 00:42:17.529566 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:17.529977 | orchestrator | 2025-05-03 00:42:17.531641 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2025-05-03 00:42:17.531973 | orchestrator | Saturday 03 May 2025 00:42:17 +0000 (0:00:00.385) 0:00:25.128 ********** 2025-05-03 00:42:17.719300 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eca5292b-8794-515a-ad73-b5efc7970d6a', 'data_vg': 'ceph-eca5292b-8794-515a-ad73-b5efc7970d6a'})  2025-05-03 00:42:17.719945 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7a18630-ef35-59a0-a2f0-363b4ab3cd76', 'data_vg': 'ceph-a7a18630-ef35-59a0-a2f0-363b4ab3cd76'})  2025-05-03 00:42:17.721866 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:17.723592 | orchestrator | 2025-05-03 00:42:17.724288 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-03 00:42:17.725104 | orchestrator | Saturday 03 May 2025 00:42:17 +0000 (0:00:00.193) 0:00:25.321 ********** 2025-05-03 00:42:17.885450 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eca5292b-8794-515a-ad73-b5efc7970d6a', 'data_vg': 'ceph-eca5292b-8794-515a-ad73-b5efc7970d6a'})  2025-05-03 00:42:17.885687 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7a18630-ef35-59a0-a2f0-363b4ab3cd76', 'data_vg': 'ceph-a7a18630-ef35-59a0-a2f0-363b4ab3cd76'})  2025-05-03 00:42:17.886087 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:42:17.887121 | orchestrator | 2025-05-03 00:42:17.887675 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-03 00:42:17.888096 | orchestrator | Saturday 03 May 2025 00:42:17 +0000 (0:00:00.167) 0:00:25.489 ********** 2025-05-03 00:42:18.578402 | orchestrator | ok: [testbed-node-3] => { 2025-05-03 00:42:18.579481 | orchestrator |  "lvm_report": { 2025-05-03 00:42:18.580309 | orchestrator |  "lv": [ 2025-05-03 00:42:18.581376 | orchestrator |  { 2025-05-03 00:42:18.584148 | orchestrator |  "lv_name": 
"osd-block-a7a18630-ef35-59a0-a2f0-363b4ab3cd76", 2025-05-03 00:42:18.584937 | orchestrator |  "vg_name": "ceph-a7a18630-ef35-59a0-a2f0-363b4ab3cd76" 2025-05-03 00:42:18.585952 | orchestrator |  }, 2025-05-03 00:42:18.587158 | orchestrator |  { 2025-05-03 00:42:18.587828 | orchestrator |  "lv_name": "osd-block-eca5292b-8794-515a-ad73-b5efc7970d6a", 2025-05-03 00:42:18.588193 | orchestrator |  "vg_name": "ceph-eca5292b-8794-515a-ad73-b5efc7970d6a" 2025-05-03 00:42:18.590067 | orchestrator |  } 2025-05-03 00:42:18.591451 | orchestrator |  ], 2025-05-03 00:42:18.592395 | orchestrator |  "pv": [ 2025-05-03 00:42:18.593281 | orchestrator |  { 2025-05-03 00:42:18.594122 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-03 00:42:18.595110 | orchestrator |  "vg_name": "ceph-eca5292b-8794-515a-ad73-b5efc7970d6a" 2025-05-03 00:42:18.596012 | orchestrator |  }, 2025-05-03 00:42:18.596925 | orchestrator |  { 2025-05-03 00:42:18.598393 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-03 00:42:18.598817 | orchestrator |  "vg_name": "ceph-a7a18630-ef35-59a0-a2f0-363b4ab3cd76" 2025-05-03 00:42:18.598879 | orchestrator |  } 2025-05-03 00:42:18.599603 | orchestrator |  ] 2025-05-03 00:42:18.600268 | orchestrator |  } 2025-05-03 00:42:18.600678 | orchestrator | } 2025-05-03 00:42:18.601296 | orchestrator | 2025-05-03 00:42:18.601814 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-03 00:42:18.602477 | orchestrator | 2025-05-03 00:42:18.602762 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-03 00:42:18.603560 | orchestrator | Saturday 03 May 2025 00:42:18 +0000 (0:00:00.690) 0:00:26.179 ********** 2025-05-03 00:42:19.180680 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-03 00:42:19.182803 | orchestrator | 2025-05-03 00:42:19.185036 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-03 
00:42:19.185975 | orchestrator | Saturday 03 May 2025 00:42:19 +0000 (0:00:00.603) 0:00:26.782 ********** 2025-05-03 00:42:19.410408 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:42:19.410687 | orchestrator | 2025-05-03 00:42:19.413127 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:42:19.414125 | orchestrator | Saturday 03 May 2025 00:42:19 +0000 (0:00:00.230) 0:00:27.013 ********** 2025-05-03 00:42:19.873988 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-03 00:42:19.940591 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-03 00:42:20.067921 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-03 00:42:20.068046 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-03 00:42:20.068084 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-03 00:42:20.068125 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-03 00:42:20.068140 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-03 00:42:20.068155 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-03 00:42:20.068169 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-03 00:42:20.068182 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-03 00:42:20.068197 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-03 00:42:20.068212 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-03 00:42:20.068225 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-05-03 00:42:20.068239 | orchestrator |
2025-05-03 00:42:20.068253 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:42:20.068268 | orchestrator | Saturday 03 May 2025 00:42:19 +0000 (0:00:00.463) 0:00:27.477 **********
2025-05-03 00:42:20.068299 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:20.069724 | orchestrator |
2025-05-03 00:42:20.070445 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:42:20.071038 | orchestrator | Saturday 03 May 2025 00:42:20 +0000 (0:00:00.194) 0:00:27.671 **********
2025-05-03 00:42:20.270563 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:20.271059 | orchestrator |
2025-05-03 00:42:20.271626 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:42:20.272380 | orchestrator | Saturday 03 May 2025 00:42:20 +0000 (0:00:00.200) 0:00:27.872 **********
2025-05-03 00:42:20.465030 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:20.465958 | orchestrator |
2025-05-03 00:42:20.466003 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:42:20.466355 | orchestrator | Saturday 03 May 2025 00:42:20 +0000 (0:00:00.196) 0:00:28.068 **********
2025-05-03 00:42:20.679784 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:20.680324 | orchestrator |
2025-05-03 00:42:20.681122 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:42:20.682127 | orchestrator | Saturday 03 May 2025 00:42:20 +0000 (0:00:00.214) 0:00:28.283 **********
2025-05-03 00:42:20.879671 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:20.879810 | orchestrator |
2025-05-03 00:42:20.882373 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:42:21.074753 | orchestrator | Saturday 03 May 2025 00:42:20 +0000 (0:00:00.197) 0:00:28.481 **********
2025-05-03 00:42:21.074927 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:21.075037 | orchestrator |
2025-05-03 00:42:21.076745 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:42:21.077367 | orchestrator | Saturday 03 May 2025 00:42:21 +0000 (0:00:00.197) 0:00:28.678 **********
2025-05-03 00:42:21.286490 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:21.287002 | orchestrator |
2025-05-03 00:42:21.287229 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:42:21.289898 | orchestrator | Saturday 03 May 2025 00:42:21 +0000 (0:00:00.210) 0:00:28.889 **********
2025-05-03 00:42:21.857245 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:21.857400 | orchestrator |
2025-05-03 00:42:21.858340 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:42:21.859045 | orchestrator | Saturday 03 May 2025 00:42:21 +0000 (0:00:00.569) 0:00:29.458 **********
2025-05-03 00:42:22.280312 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_80e5cdca-5597-4b73-960a-f9a5fdfd6b66)
2025-05-03 00:42:22.280927 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_80e5cdca-5597-4b73-960a-f9a5fdfd6b66)
2025-05-03 00:42:22.281457 | orchestrator |
2025-05-03 00:42:22.283436 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:42:22.283622 | orchestrator | Saturday 03 May 2025 00:42:22 +0000 (0:00:00.423) 0:00:29.882 **********
2025-05-03 00:42:22.710341 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8a5c9f8e-4062-4859-b774-db2eb35d9068)
2025-05-03 00:42:22.711184 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8a5c9f8e-4062-4859-b774-db2eb35d9068)
2025-05-03 00:42:22.712491 | orchestrator |
2025-05-03 00:42:22.714942 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:42:23.121475 | orchestrator | Saturday 03 May 2025 00:42:22 +0000 (0:00:00.431) 0:00:30.313 **********
2025-05-03 00:42:23.121631 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6a5303a2-e8ba-422b-a7dc-ef5d91cab650)
2025-05-03 00:42:23.122296 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6a5303a2-e8ba-422b-a7dc-ef5d91cab650)
2025-05-03 00:42:23.123306 | orchestrator |
2025-05-03 00:42:23.123659 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:42:23.124087 | orchestrator | Saturday 03 May 2025 00:42:23 +0000 (0:00:00.410) 0:00:30.724 **********
2025-05-03 00:42:23.575415 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_78b7c3f7-b361-43c7-bb55-097042834471)
2025-05-03 00:42:23.576234 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_78b7c3f7-b361-43c7-bb55-097042834471)
2025-05-03 00:42:23.577475 | orchestrator |
2025-05-03 00:42:23.578634 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-03 00:42:23.579417 | orchestrator | Saturday 03 May 2025 00:42:23 +0000 (0:00:00.451) 0:00:31.175 **********
2025-05-03 00:42:23.920110 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-03 00:42:23.920269 | orchestrator |
2025-05-03 00:42:23.920773 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:42:23.921227 | orchestrator | Saturday 03 May 2025 00:42:23 +0000 (0:00:00.347) 0:00:31.523 **********
2025-05-03 00:42:24.397457 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-05-03 00:42:24.397701 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-05-03 00:42:24.397727 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-05-03 00:42:24.397749 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-05-03 00:42:24.398540 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-05-03 00:42:24.399289 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-05-03 00:42:24.400244 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-05-03 00:42:24.401192 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-05-03 00:42:24.401808 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-05-03 00:42:24.402339 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-05-03 00:42:24.402898 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-05-03 00:42:24.403634 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-05-03 00:42:24.404398 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-05-03 00:42:24.404987 | orchestrator |
2025-05-03 00:42:24.405509 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:42:24.406097 | orchestrator | Saturday 03 May 2025 00:42:24 +0000 (0:00:00.472) 0:00:31.995 **********
2025-05-03 00:42:24.598976 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:24.599338 | orchestrator |
2025-05-03 00:42:24.599371 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:42:24.599395 | orchestrator | Saturday 03 May 2025 00:42:24 +0000 (0:00:00.207) 0:00:32.202 **********
2025-05-03 00:42:24.813438 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:24.814246 | orchestrator |
2025-05-03 00:42:24.814284 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:42:24.815444 | orchestrator | Saturday 03 May 2025 00:42:24 +0000 (0:00:00.213) 0:00:32.416 **********
2025-05-03 00:42:25.385109 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:25.386401 | orchestrator |
2025-05-03 00:42:25.386616 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:42:25.388203 | orchestrator | Saturday 03 May 2025 00:42:25 +0000 (0:00:00.570) 0:00:32.987 **********
2025-05-03 00:42:25.603506 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:25.606251 | orchestrator |
2025-05-03 00:42:25.607250 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:42:25.608347 | orchestrator | Saturday 03 May 2025 00:42:25 +0000 (0:00:00.217) 0:00:33.204 **********
2025-05-03 00:42:25.823755 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:25.824008 | orchestrator |
2025-05-03 00:42:25.824397 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:42:25.825267 | orchestrator | Saturday 03 May 2025 00:42:25 +0000 (0:00:00.222) 0:00:33.427 **********
2025-05-03 00:42:26.027343 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:26.028100 | orchestrator |
2025-05-03 00:42:26.028954 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:42:26.029483 | orchestrator | Saturday 03 May 2025 00:42:26 +0000 (0:00:00.202) 0:00:33.630 **********
2025-05-03 00:42:26.217632 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:26.218088 | orchestrator |
2025-05-03 00:42:26.218582 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:42:26.220517 | orchestrator | Saturday 03 May 2025 00:42:26 +0000 (0:00:00.189) 0:00:33.820 **********
2025-05-03 00:42:26.412399 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:26.412679 | orchestrator |
2025-05-03 00:42:26.413467 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:42:26.413970 | orchestrator | Saturday 03 May 2025 00:42:26 +0000 (0:00:00.194) 0:00:34.014 **********
2025-05-03 00:42:27.059489 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-05-03 00:42:27.059771 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-05-03 00:42:27.061102 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-05-03 00:42:27.061693 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-05-03 00:42:27.061748 | orchestrator |
2025-05-03 00:42:27.063775 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:42:27.063961 | orchestrator | Saturday 03 May 2025 00:42:27 +0000 (0:00:00.646) 0:00:34.661 **********
2025-05-03 00:42:27.271252 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:27.271554 | orchestrator |
2025-05-03 00:42:27.272705 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:42:27.273544 | orchestrator | Saturday 03 May 2025 00:42:27 +0000 (0:00:00.209) 0:00:34.870 **********
2025-05-03 00:42:27.469021 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:27.469591 | orchestrator |
2025-05-03 00:42:27.472717 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:42:27.665028 | orchestrator | Saturday 03 May 2025 00:42:27 +0000 (0:00:00.199) 0:00:35.069 **********
2025-05-03 00:42:27.665176 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:27.665319 | orchestrator |
2025-05-03 00:42:27.666304 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-03 00:42:27.666974 | orchestrator | Saturday 03 May 2025 00:42:27 +0000 (0:00:00.198) 0:00:35.268 **********
2025-05-03 00:42:28.274426 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:28.275467 | orchestrator |
2025-05-03 00:42:28.277696 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-05-03 00:42:28.278296 | orchestrator | Saturday 03 May 2025 00:42:28 +0000 (0:00:00.607) 0:00:35.875 **********
2025-05-03 00:42:28.413946 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:28.414227 | orchestrator |
2025-05-03 00:42:28.414499 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-05-03 00:42:28.415020 | orchestrator | Saturday 03 May 2025 00:42:28 +0000 (0:00:00.140) 0:00:36.016 **********
2025-05-03 00:42:28.618129 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ba494882-e80b-5600-bb3d-47da88e10312'}})
2025-05-03 00:42:28.618340 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1900210e-f5cf-596b-8948-bbf6ca001e1a'}})
2025-05-03 00:42:28.618369 | orchestrator |
2025-05-03 00:42:28.618702 | orchestrator | TASK [Create block VGs] ********************************************************
2025-05-03 00:42:28.619227 | orchestrator | Saturday 03 May 2025 00:42:28 +0000 (0:00:00.204) 0:00:36.221 **********
2025-05-03 00:42:30.413232 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ba494882-e80b-5600-bb3d-47da88e10312', 'data_vg': 'ceph-ba494882-e80b-5600-bb3d-47da88e10312'})
2025-05-03 00:42:30.413490 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1900210e-f5cf-596b-8948-bbf6ca001e1a', 'data_vg': 'ceph-1900210e-f5cf-596b-8948-bbf6ca001e1a'})
2025-05-03 00:42:30.413913 | orchestrator |
2025-05-03 00:42:30.413963 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-05-03 00:42:30.414183 | orchestrator | Saturday 03 May 2025 00:42:30 +0000 (0:00:01.794) 0:00:38.015 **********
2025-05-03 00:42:30.591962 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba494882-e80b-5600-bb3d-47da88e10312', 'data_vg': 'ceph-ba494882-e80b-5600-bb3d-47da88e10312'})
2025-05-03 00:42:30.592175 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1900210e-f5cf-596b-8948-bbf6ca001e1a', 'data_vg': 'ceph-1900210e-f5cf-596b-8948-bbf6ca001e1a'})
2025-05-03 00:42:30.593427 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:30.594708 | orchestrator |
2025-05-03 00:42:30.595741 | orchestrator | TASK [Create block LVs] ********************************************************
2025-05-03 00:42:30.596600 | orchestrator | Saturday 03 May 2025 00:42:30 +0000 (0:00:00.179) 0:00:38.195 **********
2025-05-03 00:42:31.882908 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ba494882-e80b-5600-bb3d-47da88e10312', 'data_vg': 'ceph-ba494882-e80b-5600-bb3d-47da88e10312'})
2025-05-03 00:42:31.883482 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1900210e-f5cf-596b-8948-bbf6ca001e1a', 'data_vg': 'ceph-1900210e-f5cf-596b-8948-bbf6ca001e1a'})
2025-05-03 00:42:31.886099 | orchestrator |
2025-05-03 00:42:31.886325 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-05-03 00:42:31.887042 | orchestrator | Saturday 03 May 2025 00:42:31 +0000 (0:00:01.290) 0:00:39.485 **********
2025-05-03 00:42:32.057319 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba494882-e80b-5600-bb3d-47da88e10312', 'data_vg': 'ceph-ba494882-e80b-5600-bb3d-47da88e10312'})
2025-05-03 00:42:32.058220 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1900210e-f5cf-596b-8948-bbf6ca001e1a', 'data_vg': 'ceph-1900210e-f5cf-596b-8948-bbf6ca001e1a'})
2025-05-03 00:42:32.058823 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:32.062265 | orchestrator |
2025-05-03 00:42:32.062481 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-05-03 00:42:32.062515 | orchestrator | Saturday 03 May 2025 00:42:32 +0000 (0:00:00.175) 0:00:39.660 **********
2025-05-03 00:42:32.194343 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:32.194514 | orchestrator |
2025-05-03 00:42:32.195185 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-05-03 00:42:32.195614 | orchestrator | Saturday 03 May 2025 00:42:32 +0000 (0:00:00.137) 0:00:39.798 **********
2025-05-03 00:42:32.345966 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba494882-e80b-5600-bb3d-47da88e10312', 'data_vg': 'ceph-ba494882-e80b-5600-bb3d-47da88e10312'})
2025-05-03 00:42:32.346156 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1900210e-f5cf-596b-8948-bbf6ca001e1a', 'data_vg': 'ceph-1900210e-f5cf-596b-8948-bbf6ca001e1a'})
2025-05-03 00:42:32.346620 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:32.347164 | orchestrator |
2025-05-03 00:42:32.347920 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-05-03 00:42:32.348251 | orchestrator | Saturday 03 May 2025 00:42:32 +0000 (0:00:00.150) 0:00:39.948 **********
2025-05-03 00:42:32.660672 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:32.660906 | orchestrator |
2025-05-03 00:42:32.661224 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-05-03 00:42:32.664695 | orchestrator | Saturday 03 May 2025 00:42:32 +0000 (0:00:00.314) 0:00:40.263 **********
2025-05-03 00:42:32.812165 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba494882-e80b-5600-bb3d-47da88e10312', 'data_vg': 'ceph-ba494882-e80b-5600-bb3d-47da88e10312'})
2025-05-03 00:42:32.812371 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1900210e-f5cf-596b-8948-bbf6ca001e1a', 'data_vg': 'ceph-1900210e-f5cf-596b-8948-bbf6ca001e1a'})
2025-05-03 00:42:32.815349 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:32.815478 | orchestrator |
2025-05-03 00:42:32.954694 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-05-03 00:42:32.954791 | orchestrator | Saturday 03 May 2025 00:42:32 +0000 (0:00:00.151) 0:00:40.415 **********
2025-05-03 00:42:32.954822 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:32.955420 | orchestrator |
2025-05-03 00:42:32.956250 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-05-03 00:42:32.957321 | orchestrator | Saturday 03 May 2025 00:42:32 +0000 (0:00:00.142) 0:00:40.558 **********
2025-05-03 00:42:33.128826 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba494882-e80b-5600-bb3d-47da88e10312', 'data_vg': 'ceph-ba494882-e80b-5600-bb3d-47da88e10312'})
2025-05-03 00:42:33.129415 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1900210e-f5cf-596b-8948-bbf6ca001e1a', 'data_vg': 'ceph-1900210e-f5cf-596b-8948-bbf6ca001e1a'})
2025-05-03 00:42:33.130409 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:33.131306 | orchestrator |
2025-05-03 00:42:33.132043 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-05-03 00:42:33.132611 | orchestrator | Saturday 03 May 2025 00:42:33 +0000 (0:00:00.173) 0:00:40.732 **********
2025-05-03 00:42:33.258117 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:42:33.258386 | orchestrator |
2025-05-03 00:42:33.260430 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-05-03 00:42:33.262476 | orchestrator | Saturday 03 May 2025 00:42:33 +0000 (0:00:00.129) 0:00:40.861 **********
2025-05-03 00:42:33.417277 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba494882-e80b-5600-bb3d-47da88e10312', 'data_vg': 'ceph-ba494882-e80b-5600-bb3d-47da88e10312'})
2025-05-03 00:42:33.417463 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1900210e-f5cf-596b-8948-bbf6ca001e1a', 'data_vg': 'ceph-1900210e-f5cf-596b-8948-bbf6ca001e1a'})
2025-05-03 00:42:33.418567 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:33.420865 | orchestrator |
2025-05-03 00:42:33.581271 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-05-03 00:42:33.581382 | orchestrator | Saturday 03 May 2025 00:42:33 +0000 (0:00:00.158) 0:00:41.020 **********
2025-05-03 00:42:33.581416 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba494882-e80b-5600-bb3d-47da88e10312', 'data_vg': 'ceph-ba494882-e80b-5600-bb3d-47da88e10312'})
2025-05-03 00:42:33.581722 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1900210e-f5cf-596b-8948-bbf6ca001e1a', 'data_vg': 'ceph-1900210e-f5cf-596b-8948-bbf6ca001e1a'})
2025-05-03 00:42:33.582767 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:33.583629 | orchestrator |
2025-05-03 00:42:33.584226 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-05-03 00:42:33.584820 | orchestrator | Saturday 03 May 2025 00:42:33 +0000 (0:00:00.160) 0:00:41.180 **********
2025-05-03 00:42:33.737194 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba494882-e80b-5600-bb3d-47da88e10312', 'data_vg': 'ceph-ba494882-e80b-5600-bb3d-47da88e10312'})
2025-05-03 00:42:33.737474 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1900210e-f5cf-596b-8948-bbf6ca001e1a', 'data_vg': 'ceph-1900210e-f5cf-596b-8948-bbf6ca001e1a'})
2025-05-03 00:42:33.738073 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:33.738573 | orchestrator |
2025-05-03 00:42:33.738666 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-05-03 00:42:33.739182 | orchestrator | Saturday 03 May 2025 00:42:33 +0000 (0:00:00.158) 0:00:41.338 **********
2025-05-03 00:42:33.887745 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:33.887977 | orchestrator |
2025-05-03 00:42:33.888737 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-05-03 00:42:33.889910 | orchestrator | Saturday 03 May 2025 00:42:33 +0000 (0:00:00.152) 0:00:41.491 **********
2025-05-03 00:42:34.023302 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:34.023981 | orchestrator |
2025-05-03 00:42:34.024055 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-05-03 00:42:34.025048 | orchestrator | Saturday 03 May 2025 00:42:34 +0000 (0:00:00.134) 0:00:41.626 **********
2025-05-03 00:42:34.165531 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:34.165792 | orchestrator |
2025-05-03 00:42:34.167895 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-05-03 00:42:34.168087 | orchestrator | Saturday 03 May 2025 00:42:34 +0000 (0:00:00.142) 0:00:41.768 **********
2025-05-03 00:42:34.318501 | orchestrator | ok: [testbed-node-4] => {
2025-05-03 00:42:34.318964 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-05-03 00:42:34.319641 | orchestrator | }
2025-05-03 00:42:34.320216 | orchestrator |
2025-05-03 00:42:34.320784 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-05-03 00:42:34.321468 | orchestrator | Saturday 03 May 2025 00:42:34 +0000 (0:00:00.153) 0:00:41.921 **********
2025-05-03 00:42:34.658900 | orchestrator | ok: [testbed-node-4] => {
2025-05-03 00:42:34.659208 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-05-03 00:42:34.660393 | orchestrator | }
2025-05-03 00:42:34.660429 | orchestrator |
2025-05-03 00:42:34.802435 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-05-03 00:42:34.802583 | orchestrator | Saturday 03 May 2025 00:42:34 +0000 (0:00:00.338) 0:00:42.260 **********
2025-05-03 00:42:34.802620 | orchestrator | ok: [testbed-node-4] => {
2025-05-03 00:42:34.803485 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-05-03 00:42:34.804373 | orchestrator | }
2025-05-03 00:42:34.805391 | orchestrator |
2025-05-03 00:42:34.805799 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-05-03 00:42:34.806337 | orchestrator | Saturday 03 May 2025 00:42:34 +0000 (0:00:00.145) 0:00:42.406 **********
2025-05-03 00:42:35.274753 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:42:35.275057 | orchestrator |
2025-05-03 00:42:35.275800 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-05-03 00:42:35.276274 | orchestrator | Saturday 03 May 2025 00:42:35 +0000 (0:00:00.472) 0:00:42.878 **********
2025-05-03 00:42:35.751735 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:42:35.751997 | orchestrator |
2025-05-03 00:42:35.754527 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-05-03 00:42:36.247692 | orchestrator | Saturday 03 May 2025 00:42:35 +0000 (0:00:00.474) 0:00:43.352 **********
2025-05-03 00:42:36.247900 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:42:36.248704 | orchestrator |
2025-05-03 00:42:36.249269 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-05-03 00:42:36.251921 | orchestrator | Saturday 03 May 2025 00:42:36 +0000 (0:00:00.498) 0:00:43.850 **********
2025-05-03 00:42:36.393311 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:42:36.394387 | orchestrator |
2025-05-03 00:42:36.395277 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-05-03 00:42:36.396274 | orchestrator | Saturday 03 May 2025 00:42:36 +0000 (0:00:00.145) 0:00:43.996 **********
2025-05-03 00:42:36.506502 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:36.507275 | orchestrator |
2025-05-03 00:42:36.508071 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-05-03 00:42:36.508485 | orchestrator | Saturday 03 May 2025 00:42:36 +0000 (0:00:00.113) 0:00:44.110 **********
2025-05-03 00:42:36.627928 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:36.628757 | orchestrator |
2025-05-03 00:42:36.628798 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-05-03 00:42:36.628823 | orchestrator | Saturday 03 May 2025 00:42:36 +0000 (0:00:00.116) 0:00:44.226 **********
2025-05-03 00:42:36.781309 | orchestrator | ok: [testbed-node-4] => {
2025-05-03 00:42:36.781743 | orchestrator |  "vgs_report": {
2025-05-03 00:42:36.782772 | orchestrator |  "vg": []
2025-05-03 00:42:36.783897 | orchestrator |  }
2025-05-03 00:42:36.784777 | orchestrator | }
2025-05-03 00:42:36.785123 | orchestrator |
2025-05-03 00:42:36.785616 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-05-03 00:42:36.786262 | orchestrator | Saturday 03 May 2025 00:42:36 +0000 (0:00:00.157) 0:00:44.383 **********
2025-05-03 00:42:36.917329 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:36.917817 | orchestrator |
2025-05-03 00:42:36.918530 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-05-03 00:42:36.918902 | orchestrator | Saturday 03 May 2025 00:42:36 +0000 (0:00:00.136) 0:00:44.520 **********
2025-05-03 00:42:37.240542 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:37.241373 | orchestrator |
2025-05-03 00:42:37.241978 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-05-03 00:42:37.242720 | orchestrator | Saturday 03 May 2025 00:42:37 +0000 (0:00:00.323) 0:00:44.844 **********
2025-05-03 00:42:37.377082 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:37.377262 | orchestrator |
2025-05-03 00:42:37.378740 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-05-03 00:42:37.379433 | orchestrator | Saturday 03 May 2025 00:42:37 +0000 (0:00:00.136) 0:00:44.980 **********
2025-05-03 00:42:37.526223 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:37.527626 | orchestrator |
2025-05-03 00:42:37.527666 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-05-03 00:42:37.528376 | orchestrator | Saturday 03 May 2025 00:42:37 +0000 (0:00:00.148) 0:00:45.128 **********
2025-05-03 00:42:37.667670 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:37.668672 | orchestrator |
2025-05-03 00:42:37.672691 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-05-03 00:42:37.816182 | orchestrator | Saturday 03 May 2025 00:42:37 +0000 (0:00:00.141) 0:00:45.269 **********
2025-05-03 00:42:37.816331 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:37.817229 | orchestrator |
2025-05-03 00:42:37.818825 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-05-03 00:42:37.820222 | orchestrator | Saturday 03 May 2025 00:42:37 +0000 (0:00:00.150) 0:00:45.420 **********
2025-05-03 00:42:37.957184 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:37.958261 | orchestrator |
2025-05-03 00:42:37.958552 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-05-03 00:42:37.959697 | orchestrator | Saturday 03 May 2025 00:42:37 +0000 (0:00:00.139) 0:00:45.560 **********
2025-05-03 00:42:38.097180 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:38.098426 | orchestrator |
2025-05-03 00:42:38.099286 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-05-03 00:42:38.101668 | orchestrator | Saturday 03 May 2025 00:42:38 +0000 (0:00:00.140) 0:00:45.700 **********
2025-05-03 00:42:38.259639 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:38.259831 | orchestrator |
2025-05-03 00:42:38.259916 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-05-03 00:42:38.259939 | orchestrator | Saturday 03 May 2025 00:42:38 +0000 (0:00:00.157) 0:00:45.858 **********
2025-05-03 00:42:38.397038 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:38.399621 | orchestrator |
2025-05-03 00:42:38.399749 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-05-03 00:42:38.401432 | orchestrator | Saturday 03 May 2025 00:42:38 +0000 (0:00:00.141) 0:00:45.999 **********
2025-05-03 00:42:38.539970 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:38.542691 | orchestrator |
2025-05-03 00:42:38.544582 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-05-03 00:42:38.688216 | orchestrator | Saturday 03 May 2025 00:42:38 +0000 (0:00:00.144) 0:00:46.143 **********
2025-05-03 00:42:38.688347 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:38.692078 | orchestrator |
2025-05-03 00:42:38.695782 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-05-03 00:42:38.696322 | orchestrator | Saturday 03 May 2025 00:42:38 +0000 (0:00:00.146) 0:00:46.290 **********
2025-05-03 00:42:38.829196 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:38.830310 | orchestrator |
2025-05-03 00:42:38.831041 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-05-03 00:42:38.834717 | orchestrator | Saturday 03 May 2025 00:42:38 +0000 (0:00:00.142) 0:00:46.432 **********
2025-05-03 00:42:38.969350 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:38.970462 | orchestrator |
2025-05-03 00:42:38.971615 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-05-03 00:42:38.974867 | orchestrator | Saturday 03 May 2025 00:42:38 +0000 (0:00:00.140) 0:00:46.572 **********
2025-05-03 00:42:39.361495 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba494882-e80b-5600-bb3d-47da88e10312', 'data_vg': 'ceph-ba494882-e80b-5600-bb3d-47da88e10312'})
2025-05-03 00:42:39.362648 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1900210e-f5cf-596b-8948-bbf6ca001e1a', 'data_vg': 'ceph-1900210e-f5cf-596b-8948-bbf6ca001e1a'})
2025-05-03 00:42:39.363919 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:39.363959 | orchestrator |
2025-05-03 00:42:39.368280 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-05-03 00:42:39.369368 | orchestrator | Saturday 03 May 2025 00:42:39 +0000 (0:00:00.391) 0:00:46.964 **********
2025-05-03 00:42:39.523733 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba494882-e80b-5600-bb3d-47da88e10312', 'data_vg': 'ceph-ba494882-e80b-5600-bb3d-47da88e10312'})
2025-05-03 00:42:39.524460 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1900210e-f5cf-596b-8948-bbf6ca001e1a', 'data_vg': 'ceph-1900210e-f5cf-596b-8948-bbf6ca001e1a'})
2025-05-03 00:42:39.525236 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:39.529352 | orchestrator |
2025-05-03 00:42:39.529467 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-05-03 00:42:39.530352 | orchestrator | Saturday 03 May 2025 00:42:39 +0000 (0:00:00.162) 0:00:47.127 **********
2025-05-03 00:42:39.693992 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba494882-e80b-5600-bb3d-47da88e10312', 'data_vg': 'ceph-ba494882-e80b-5600-bb3d-47da88e10312'})
2025-05-03 00:42:39.698570 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1900210e-f5cf-596b-8948-bbf6ca001e1a', 'data_vg': 'ceph-1900210e-f5cf-596b-8948-bbf6ca001e1a'})
2025-05-03 00:42:39.699507 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:39.700013 | orchestrator |
2025-05-03 00:42:39.700389 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-05-03 00:42:39.703517 | orchestrator | Saturday 03 May 2025 00:42:39 +0000 (0:00:00.168) 0:00:47.296 **********
2025-05-03 00:42:39.857130 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba494882-e80b-5600-bb3d-47da88e10312', 'data_vg': 'ceph-ba494882-e80b-5600-bb3d-47da88e10312'})
2025-05-03 00:42:39.858133 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1900210e-f5cf-596b-8948-bbf6ca001e1a', 'data_vg': 'ceph-1900210e-f5cf-596b-8948-bbf6ca001e1a'})
2025-05-03 00:42:39.859309 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:39.860826 | orchestrator |
2025-05-03 00:42:39.861199 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-05-03 00:42:39.861880 | orchestrator | Saturday 03 May 2025 00:42:39 +0000 (0:00:00.164) 0:00:47.460 **********
2025-05-03 00:42:40.029173 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba494882-e80b-5600-bb3d-47da88e10312', 'data_vg': 'ceph-ba494882-e80b-5600-bb3d-47da88e10312'})
2025-05-03 00:42:40.029890 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1900210e-f5cf-596b-8948-bbf6ca001e1a', 'data_vg': 'ceph-1900210e-f5cf-596b-8948-bbf6ca001e1a'})
2025-05-03 00:42:40.030518 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:40.031201 | orchestrator |
2025-05-03 00:42:40.034096 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-05-03 00:42:40.034209 | orchestrator | Saturday 03 May 2025 00:42:40 +0000 (0:00:00.172) 0:00:47.632 **********
2025-05-03 00:42:40.191829 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba494882-e80b-5600-bb3d-47da88e10312', 'data_vg': 'ceph-ba494882-e80b-5600-bb3d-47da88e10312'})
2025-05-03 00:42:40.197647 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1900210e-f5cf-596b-8948-bbf6ca001e1a', 'data_vg': 'ceph-1900210e-f5cf-596b-8948-bbf6ca001e1a'})
2025-05-03 00:42:40.197732 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:40.198335 | orchestrator |
2025-05-03 00:42:40.198932 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-05-03 00:42:40.199461 | orchestrator | Saturday 03 May 2025 00:42:40 +0000 (0:00:00.157) 0:00:47.790 **********
2025-05-03 00:42:40.351467 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba494882-e80b-5600-bb3d-47da88e10312', 'data_vg': 'ceph-ba494882-e80b-5600-bb3d-47da88e10312'})
2025-05-03 00:42:40.353324 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1900210e-f5cf-596b-8948-bbf6ca001e1a', 'data_vg': 'ceph-1900210e-f5cf-596b-8948-bbf6ca001e1a'})
2025-05-03 00:42:40.353387 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:40.353478 | orchestrator |
2025-05-03 00:42:40.353504 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-05-03 00:42:40.353524 | orchestrator | Saturday 03 May 2025 00:42:40 +0000 (0:00:00.164) 0:00:47.954 **********
2025-05-03 00:42:40.514773 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba494882-e80b-5600-bb3d-47da88e10312', 'data_vg': 'ceph-ba494882-e80b-5600-bb3d-47da88e10312'})
2025-05-03 00:42:40.516651 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1900210e-f5cf-596b-8948-bbf6ca001e1a', 'data_vg': 'ceph-1900210e-f5cf-596b-8948-bbf6ca001e1a'})
2025-05-03 00:42:40.518469 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:40.519385 | orchestrator |
2025-05-03 00:42:40.519800 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-05-03 00:42:40.520418 | orchestrator | Saturday 03 May 2025 00:42:40 +0000 (0:00:00.162) 0:00:48.117 **********
2025-05-03 00:42:41.004733 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:42:41.006274 | orchestrator |
2025-05-03 00:42:41.006572 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-05-03 00:42:41.006608 | orchestrator | Saturday 03 May 2025 00:42:40 +0000 (0:00:00.489) 0:00:48.606 **********
2025-05-03 00:42:41.502279 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:42:41.503015 | orchestrator |
2025-05-03 00:42:41.503414 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-05-03 00:42:41.505157 | orchestrator | Saturday 03 May 2025 00:42:41 +0000 (0:00:00.498) 0:00:49.105 **********
2025-05-03 00:42:41.851592 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:42:41.851770 | orchestrator |
2025-05-03 00:42:41.852626 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-05-03 00:42:41.853338 | orchestrator | Saturday 03 May 2025 00:42:41 +0000 (0:00:00.348) 0:00:49.453 **********
2025-05-03 00:42:42.049739 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-1900210e-f5cf-596b-8948-bbf6ca001e1a', 'vg_name': 'ceph-1900210e-f5cf-596b-8948-bbf6ca001e1a'})
2025-05-03 00:42:42.051395 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-ba494882-e80b-5600-bb3d-47da88e10312', 'vg_name': 'ceph-ba494882-e80b-5600-bb3d-47da88e10312'})
2025-05-03 00:42:42.055373 | orchestrator |
2025-05-03 00:42:42.217486 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-05-03 00:42:42.217596 | orchestrator | Saturday 03 May 2025 00:42:42 +0000 (0:00:00.196) 0:00:49.650 **********
2025-05-03 00:42:42.217631 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba494882-e80b-5600-bb3d-47da88e10312', 'data_vg': 'ceph-ba494882-e80b-5600-bb3d-47da88e10312'})
2025-05-03 00:42:42.218285 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1900210e-f5cf-596b-8948-bbf6ca001e1a', 'data_vg': 'ceph-1900210e-f5cf-596b-8948-bbf6ca001e1a'})
2025-05-03 00:42:42.219574 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:42.220576 | orchestrator |
2025-05-03 00:42:42.224357 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-05-03 00:42:42.387744 | orchestrator | Saturday 03 May 2025 00:42:42 +0000 (0:00:00.171) 0:00:49.821 **********
2025-05-03 00:42:42.387914 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba494882-e80b-5600-bb3d-47da88e10312', 'data_vg': 'ceph-ba494882-e80b-5600-bb3d-47da88e10312'})
2025-05-03 00:42:42.388966 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1900210e-f5cf-596b-8948-bbf6ca001e1a', 'data_vg': 'ceph-1900210e-f5cf-596b-8948-bbf6ca001e1a'})
2025-05-03 00:42:42.389920 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:42:42.396412 | orchestrator |
2025-05-03 00:42:42.559478 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-05-03 00:42:42.559582 |
orchestrator | Saturday 03 May 2025 00:42:42 +0000 (0:00:00.170) 0:00:49.991 ********** 2025-05-03 00:42:42.559618 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba494882-e80b-5600-bb3d-47da88e10312', 'data_vg': 'ceph-ba494882-e80b-5600-bb3d-47da88e10312'})  2025-05-03 00:42:42.562384 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1900210e-f5cf-596b-8948-bbf6ca001e1a', 'data_vg': 'ceph-1900210e-f5cf-596b-8948-bbf6ca001e1a'})  2025-05-03 00:42:42.563071 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:42:42.564162 | orchestrator | 2025-05-03 00:42:42.565110 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-03 00:42:42.567336 | orchestrator | Saturday 03 May 2025 00:42:42 +0000 (0:00:00.170) 0:00:50.162 ********** 2025-05-03 00:42:43.413886 | orchestrator | ok: [testbed-node-4] => { 2025-05-03 00:42:43.415714 | orchestrator |  "lvm_report": { 2025-05-03 00:42:43.415790 | orchestrator |  "lv": [ 2025-05-03 00:42:43.417404 | orchestrator |  { 2025-05-03 00:42:43.419250 | orchestrator |  "lv_name": "osd-block-1900210e-f5cf-596b-8948-bbf6ca001e1a", 2025-05-03 00:42:43.420424 | orchestrator |  "vg_name": "ceph-1900210e-f5cf-596b-8948-bbf6ca001e1a" 2025-05-03 00:42:43.421660 | orchestrator |  }, 2025-05-03 00:42:43.422462 | orchestrator |  { 2025-05-03 00:42:43.423651 | orchestrator |  "lv_name": "osd-block-ba494882-e80b-5600-bb3d-47da88e10312", 2025-05-03 00:42:43.424801 | orchestrator |  "vg_name": "ceph-ba494882-e80b-5600-bb3d-47da88e10312" 2025-05-03 00:42:43.425582 | orchestrator |  } 2025-05-03 00:42:43.426347 | orchestrator |  ], 2025-05-03 00:42:43.427207 | orchestrator |  "pv": [ 2025-05-03 00:42:43.428287 | orchestrator |  { 2025-05-03 00:42:43.429030 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-03 00:42:43.429910 | orchestrator |  "vg_name": "ceph-ba494882-e80b-5600-bb3d-47da88e10312" 2025-05-03 00:42:43.430765 | orchestrator |  }, 2025-05-03 
00:42:43.431757 | orchestrator |  { 2025-05-03 00:42:43.432430 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-03 00:42:43.433478 | orchestrator |  "vg_name": "ceph-1900210e-f5cf-596b-8948-bbf6ca001e1a" 2025-05-03 00:42:43.434118 | orchestrator |  } 2025-05-03 00:42:43.435212 | orchestrator |  ] 2025-05-03 00:42:43.435680 | orchestrator |  } 2025-05-03 00:42:43.436458 | orchestrator | } 2025-05-03 00:42:43.437065 | orchestrator | 2025-05-03 00:42:43.437554 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-03 00:42:43.438339 | orchestrator | 2025-05-03 00:42:43.439234 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-03 00:42:43.439606 | orchestrator | Saturday 03 May 2025 00:42:43 +0000 (0:00:00.854) 0:00:51.016 ********** 2025-05-03 00:42:43.657965 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-03 00:42:43.659637 | orchestrator | 2025-05-03 00:42:43.660740 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-03 00:42:43.661752 | orchestrator | Saturday 03 May 2025 00:42:43 +0000 (0:00:00.243) 0:00:51.260 ********** 2025-05-03 00:42:43.893757 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:42:44.333984 | orchestrator | 2025-05-03 00:42:44.334146 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:42:44.334168 | orchestrator | Saturday 03 May 2025 00:42:43 +0000 (0:00:00.234) 0:00:51.495 ********** 2025-05-03 00:42:44.334201 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-03 00:42:44.335321 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-03 00:42:44.336670 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-03 00:42:44.337853 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-03 00:42:44.339269 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-05-03 00:42:44.340540 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-03 00:42:44.341784 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-03 00:42:44.342637 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-03 00:42:44.344391 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-03 00:42:44.345397 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-03 00:42:44.346746 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-03 00:42:44.347607 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-03 00:42:44.348368 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-03 00:42:44.349294 | orchestrator | 2025-05-03 00:42:44.350291 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:42:44.351379 | orchestrator | Saturday 03 May 2025 00:42:44 +0000 (0:00:00.442) 0:00:51.938 ********** 2025-05-03 00:42:44.553770 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:42:44.555762 | orchestrator | 2025-05-03 00:42:44.556290 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:42:44.557700 | orchestrator | Saturday 03 May 2025 00:42:44 +0000 (0:00:00.219) 0:00:52.157 ********** 2025-05-03 00:42:44.773229 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:42:44.774694 | orchestrator | 2025-05-03 
00:42:44.776722 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:42:44.777110 | orchestrator | Saturday 03 May 2025 00:42:44 +0000 (0:00:00.218) 0:00:52.375 ********** 2025-05-03 00:42:44.975826 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:42:44.977074 | orchestrator | 2025-05-03 00:42:44.979300 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:42:44.980556 | orchestrator | Saturday 03 May 2025 00:42:44 +0000 (0:00:00.203) 0:00:52.578 ********** 2025-05-03 00:42:45.182128 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:42:45.182702 | orchestrator | 2025-05-03 00:42:45.184949 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:42:45.185825 | orchestrator | Saturday 03 May 2025 00:42:45 +0000 (0:00:00.206) 0:00:52.785 ********** 2025-05-03 00:42:45.373100 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:42:45.373492 | orchestrator | 2025-05-03 00:42:45.375028 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:42:45.376122 | orchestrator | Saturday 03 May 2025 00:42:45 +0000 (0:00:00.191) 0:00:52.976 ********** 2025-05-03 00:42:45.973156 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:42:45.974572 | orchestrator | 2025-05-03 00:42:45.978186 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:42:45.979118 | orchestrator | Saturday 03 May 2025 00:42:45 +0000 (0:00:00.598) 0:00:53.575 ********** 2025-05-03 00:42:46.198348 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:42:46.199129 | orchestrator | 2025-05-03 00:42:46.199993 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:42:46.203505 | orchestrator | Saturday 03 May 2025 00:42:46 +0000 (0:00:00.226) 
0:00:53.802 ********** 2025-05-03 00:42:46.398308 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:42:46.399408 | orchestrator | 2025-05-03 00:42:46.403483 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:42:46.405245 | orchestrator | Saturday 03 May 2025 00:42:46 +0000 (0:00:00.199) 0:00:54.001 ********** 2025-05-03 00:42:46.836689 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4f7b2d31-8f7a-47ec-8821-2cb523ca656c) 2025-05-03 00:42:46.838225 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4f7b2d31-8f7a-47ec-8821-2cb523ca656c) 2025-05-03 00:42:46.838307 | orchestrator | 2025-05-03 00:42:47.322763 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:42:47.322923 | orchestrator | Saturday 03 May 2025 00:42:46 +0000 (0:00:00.438) 0:00:54.439 ********** 2025-05-03 00:42:47.322962 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_626178b7-dd78-4872-a9d6-22f12232405d) 2025-05-03 00:42:47.323658 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_626178b7-dd78-4872-a9d6-22f12232405d) 2025-05-03 00:42:47.324875 | orchestrator | 2025-05-03 00:42:47.326107 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:42:47.327045 | orchestrator | Saturday 03 May 2025 00:42:47 +0000 (0:00:00.486) 0:00:54.926 ********** 2025-05-03 00:42:47.762273 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cf08c0e7-08ad-4d2d-8710-ce05fc114cf2) 2025-05-03 00:42:47.762777 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cf08c0e7-08ad-4d2d-8710-ce05fc114cf2) 2025-05-03 00:42:47.764995 | orchestrator | 2025-05-03 00:42:47.765820 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:42:47.767651 | orchestrator | Saturday 03 
May 2025 00:42:47 +0000 (0:00:00.437) 0:00:55.363 ********** 2025-05-03 00:42:48.171461 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_592c23d3-c323-4834-ad18-db1726824a9d) 2025-05-03 00:42:48.173015 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_592c23d3-c323-4834-ad18-db1726824a9d) 2025-05-03 00:42:48.175385 | orchestrator | 2025-05-03 00:42:48.178121 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-03 00:42:48.509149 | orchestrator | Saturday 03 May 2025 00:42:48 +0000 (0:00:00.410) 0:00:55.774 ********** 2025-05-03 00:42:48.509279 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-03 00:42:48.510249 | orchestrator | 2025-05-03 00:42:48.511206 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:42:48.513120 | orchestrator | Saturday 03 May 2025 00:42:48 +0000 (0:00:00.338) 0:00:56.112 ********** 2025-05-03 00:42:48.974171 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-03 00:42:48.976668 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-03 00:42:48.977718 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-03 00:42:48.979347 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-03 00:42:48.979773 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-03 00:42:48.980485 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-03 00:42:48.981280 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-03 00:42:48.981925 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-03 00:42:48.982448 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-03 00:42:48.982781 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-03 00:42:48.983456 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-03 00:42:48.983761 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-03 00:42:48.984213 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-03 00:42:48.984672 | orchestrator | 2025-05-03 00:42:48.985368 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:42:48.985476 | orchestrator | Saturday 03 May 2025 00:42:48 +0000 (0:00:00.463) 0:00:56.576 ********** 2025-05-03 00:42:49.548502 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:42:49.548675 | orchestrator | 2025-05-03 00:42:49.548954 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:42:49.549346 | orchestrator | Saturday 03 May 2025 00:42:49 +0000 (0:00:00.576) 0:00:57.152 ********** 2025-05-03 00:42:49.756171 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:42:49.756819 | orchestrator | 2025-05-03 00:42:49.757740 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:42:49.758139 | orchestrator | Saturday 03 May 2025 00:42:49 +0000 (0:00:00.207) 0:00:57.359 ********** 2025-05-03 00:42:49.969394 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:42:49.970149 | orchestrator | 2025-05-03 00:42:49.970283 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:42:49.971733 | 
orchestrator | Saturday 03 May 2025 00:42:49 +0000 (0:00:00.213) 0:00:57.573 ********** 2025-05-03 00:42:50.166702 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:42:50.168242 | orchestrator | 2025-05-03 00:42:50.169409 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:42:50.170113 | orchestrator | Saturday 03 May 2025 00:42:50 +0000 (0:00:00.197) 0:00:57.770 ********** 2025-05-03 00:42:50.378352 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:42:50.378610 | orchestrator | 2025-05-03 00:42:50.379994 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:42:50.381427 | orchestrator | Saturday 03 May 2025 00:42:50 +0000 (0:00:00.210) 0:00:57.981 ********** 2025-05-03 00:42:50.585319 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:42:50.586245 | orchestrator | 2025-05-03 00:42:50.586293 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:42:50.587635 | orchestrator | Saturday 03 May 2025 00:42:50 +0000 (0:00:00.206) 0:00:58.187 ********** 2025-05-03 00:42:50.796240 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:42:50.796772 | orchestrator | 2025-05-03 00:42:50.798580 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:42:50.799554 | orchestrator | Saturday 03 May 2025 00:42:50 +0000 (0:00:00.212) 0:00:58.400 ********** 2025-05-03 00:42:51.000613 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:42:51.000993 | orchestrator | 2025-05-03 00:42:51.001906 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:42:51.002746 | orchestrator | Saturday 03 May 2025 00:42:50 +0000 (0:00:00.203) 0:00:58.604 ********** 2025-05-03 00:42:51.837206 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-03 00:42:51.837794 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2025-05-03 00:42:51.838645 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-03 00:42:51.839480 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-03 00:42:51.841629 | orchestrator | 2025-05-03 00:42:51.841980 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:42:51.842064 | orchestrator | Saturday 03 May 2025 00:42:51 +0000 (0:00:00.837) 0:00:59.441 ********** 2025-05-03 00:42:52.032169 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:42:52.033701 | orchestrator | 2025-05-03 00:42:52.034307 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:42:52.034533 | orchestrator | Saturday 03 May 2025 00:42:52 +0000 (0:00:00.194) 0:00:59.635 ********** 2025-05-03 00:42:52.698115 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:42:52.698551 | orchestrator | 2025-05-03 00:42:52.699259 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:42:52.699926 | orchestrator | Saturday 03 May 2025 00:42:52 +0000 (0:00:00.666) 0:01:00.302 ********** 2025-05-03 00:42:52.904176 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:42:52.904483 | orchestrator | 2025-05-03 00:42:52.905397 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-03 00:42:52.906377 | orchestrator | Saturday 03 May 2025 00:42:52 +0000 (0:00:00.204) 0:01:00.506 ********** 2025-05-03 00:42:53.102921 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:42:53.103068 | orchestrator | 2025-05-03 00:42:53.104088 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-03 00:42:53.105213 | orchestrator | Saturday 03 May 2025 00:42:53 +0000 (0:00:00.199) 0:01:00.706 ********** 2025-05-03 00:42:53.242264 | orchestrator | skipping: [testbed-node-5] 2025-05-03 
00:42:53.242409 | orchestrator | 2025-05-03 00:42:53.243361 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-03 00:42:53.244214 | orchestrator | Saturday 03 May 2025 00:42:53 +0000 (0:00:00.139) 0:01:00.846 ********** 2025-05-03 00:42:53.448219 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '63c4e6bd-963b-5ec8-a8d0-e52c79716553'}}) 2025-05-03 00:42:53.449404 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f0db6d06-6fa6-557d-977f-52f0cf84ead8'}}) 2025-05-03 00:42:53.449446 | orchestrator | 2025-05-03 00:42:53.450520 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-03 00:42:53.451623 | orchestrator | Saturday 03 May 2025 00:42:53 +0000 (0:00:00.205) 0:01:01.051 ********** 2025-05-03 00:42:55.298377 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-63c4e6bd-963b-5ec8-a8d0-e52c79716553', 'data_vg': 'ceph-63c4e6bd-963b-5ec8-a8d0-e52c79716553'}) 2025-05-03 00:42:55.298539 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f0db6d06-6fa6-557d-977f-52f0cf84ead8', 'data_vg': 'ceph-f0db6d06-6fa6-557d-977f-52f0cf84ead8'}) 2025-05-03 00:42:55.298779 | orchestrator | 2025-05-03 00:42:55.299496 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-03 00:42:55.300437 | orchestrator | Saturday 03 May 2025 00:42:55 +0000 (0:00:01.849) 0:01:02.900 ********** 2025-05-03 00:42:55.470727 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-63c4e6bd-963b-5ec8-a8d0-e52c79716553', 'data_vg': 'ceph-63c4e6bd-963b-5ec8-a8d0-e52c79716553'})  2025-05-03 00:42:55.471085 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0db6d06-6fa6-557d-977f-52f0cf84ead8', 'data_vg': 'ceph-f0db6d06-6fa6-557d-977f-52f0cf84ead8'})  2025-05-03 00:42:55.471592 | orchestrator | skipping: 
[testbed-node-5] 2025-05-03 00:42:55.472326 | orchestrator | 2025-05-03 00:42:55.472900 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-03 00:42:55.473588 | orchestrator | Saturday 03 May 2025 00:42:55 +0000 (0:00:00.173) 0:01:03.074 ********** 2025-05-03 00:42:56.762174 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-63c4e6bd-963b-5ec8-a8d0-e52c79716553', 'data_vg': 'ceph-63c4e6bd-963b-5ec8-a8d0-e52c79716553'}) 2025-05-03 00:42:56.762536 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f0db6d06-6fa6-557d-977f-52f0cf84ead8', 'data_vg': 'ceph-f0db6d06-6fa6-557d-977f-52f0cf84ead8'}) 2025-05-03 00:42:56.763697 | orchestrator | 2025-05-03 00:42:56.763923 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-03 00:42:56.764793 | orchestrator | Saturday 03 May 2025 00:42:56 +0000 (0:00:01.289) 0:01:04.364 ********** 2025-05-03 00:42:56.921389 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-63c4e6bd-963b-5ec8-a8d0-e52c79716553', 'data_vg': 'ceph-63c4e6bd-963b-5ec8-a8d0-e52c79716553'})  2025-05-03 00:42:56.921595 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0db6d06-6fa6-557d-977f-52f0cf84ead8', 'data_vg': 'ceph-f0db6d06-6fa6-557d-977f-52f0cf84ead8'})  2025-05-03 00:42:56.921826 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:42:56.922763 | orchestrator | 2025-05-03 00:42:56.928359 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-03 00:42:56.928694 | orchestrator | Saturday 03 May 2025 00:42:56 +0000 (0:00:00.160) 0:01:04.525 ********** 2025-05-03 00:42:57.230149 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:42:57.230588 | orchestrator | 2025-05-03 00:42:57.231693 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-03 00:42:57.232480 | 
orchestrator | Saturday 03 May 2025 00:42:57 +0000 (0:00:00.308) 0:01:04.833 ********** 2025-05-03 00:42:57.402492 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-63c4e6bd-963b-5ec8-a8d0-e52c79716553', 'data_vg': 'ceph-63c4e6bd-963b-5ec8-a8d0-e52c79716553'})  2025-05-03 00:42:57.403100 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0db6d06-6fa6-557d-977f-52f0cf84ead8', 'data_vg': 'ceph-f0db6d06-6fa6-557d-977f-52f0cf84ead8'})  2025-05-03 00:42:57.404272 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:42:57.405012 | orchestrator | 2025-05-03 00:42:57.406065 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-03 00:42:57.406591 | orchestrator | Saturday 03 May 2025 00:42:57 +0000 (0:00:00.170) 0:01:05.004 ********** 2025-05-03 00:42:57.554120 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:42:57.554330 | orchestrator | 2025-05-03 00:42:57.554621 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-03 00:42:57.554864 | orchestrator | Saturday 03 May 2025 00:42:57 +0000 (0:00:00.152) 0:01:05.157 ********** 2025-05-03 00:42:57.728892 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-63c4e6bd-963b-5ec8-a8d0-e52c79716553', 'data_vg': 'ceph-63c4e6bd-963b-5ec8-a8d0-e52c79716553'})  2025-05-03 00:42:57.729092 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0db6d06-6fa6-557d-977f-52f0cf84ead8', 'data_vg': 'ceph-f0db6d06-6fa6-557d-977f-52f0cf84ead8'})  2025-05-03 00:42:57.730129 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:42:57.730928 | orchestrator | 2025-05-03 00:42:57.732165 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-03 00:42:57.732864 | orchestrator | Saturday 03 May 2025 00:42:57 +0000 (0:00:00.173) 0:01:05.331 ********** 2025-05-03 00:42:57.862297 | orchestrator | 
skipping: [testbed-node-5] 2025-05-03 00:42:57.862456 | orchestrator | 2025-05-03 00:42:57.863337 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-03 00:42:57.864552 | orchestrator | Saturday 03 May 2025 00:42:57 +0000 (0:00:00.133) 0:01:05.464 ********** 2025-05-03 00:42:58.022354 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-63c4e6bd-963b-5ec8-a8d0-e52c79716553', 'data_vg': 'ceph-63c4e6bd-963b-5ec8-a8d0-e52c79716553'})  2025-05-03 00:42:58.022533 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0db6d06-6fa6-557d-977f-52f0cf84ead8', 'data_vg': 'ceph-f0db6d06-6fa6-557d-977f-52f0cf84ead8'})  2025-05-03 00:42:58.024038 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:42:58.025277 | orchestrator | 2025-05-03 00:42:58.025816 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-03 00:42:58.026602 | orchestrator | Saturday 03 May 2025 00:42:58 +0000 (0:00:00.161) 0:01:05.625 ********** 2025-05-03 00:42:58.170908 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:42:58.171397 | orchestrator | 2025-05-03 00:42:58.172192 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-03 00:42:58.172691 | orchestrator | Saturday 03 May 2025 00:42:58 +0000 (0:00:00.149) 0:01:05.774 ********** 2025-05-03 00:42:58.348101 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-63c4e6bd-963b-5ec8-a8d0-e52c79716553', 'data_vg': 'ceph-63c4e6bd-963b-5ec8-a8d0-e52c79716553'})  2025-05-03 00:42:58.348309 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0db6d06-6fa6-557d-977f-52f0cf84ead8', 'data_vg': 'ceph-f0db6d06-6fa6-557d-977f-52f0cf84ead8'})  2025-05-03 00:42:58.349981 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:42:58.350908 | orchestrator | 2025-05-03 00:42:58.351390 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] ***************
2025-05-03 00:42:58.352366 | orchestrator | Saturday 03 May 2025 00:42:58 +0000 (0:00:00.177) 0:01:05.952 **********
2025-05-03 00:42:58.514982 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-63c4e6bd-963b-5ec8-a8d0-e52c79716553', 'data_vg': 'ceph-63c4e6bd-963b-5ec8-a8d0-e52c79716553'})
2025-05-03 00:42:58.515501 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0db6d06-6fa6-557d-977f-52f0cf84ead8', 'data_vg': 'ceph-f0db6d06-6fa6-557d-977f-52f0cf84ead8'})
2025-05-03 00:42:58.515545 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:42:58.516360 | orchestrator |
2025-05-03 00:42:58.517413 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-05-03 00:42:58.518151 | orchestrator | Saturday 03 May 2025 00:42:58 +0000 (0:00:00.166) 0:01:06.118 **********
2025-05-03 00:42:58.675006 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-63c4e6bd-963b-5ec8-a8d0-e52c79716553', 'data_vg': 'ceph-63c4e6bd-963b-5ec8-a8d0-e52c79716553'})
2025-05-03 00:42:58.675200 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0db6d06-6fa6-557d-977f-52f0cf84ead8', 'data_vg': 'ceph-f0db6d06-6fa6-557d-977f-52f0cf84ead8'})
2025-05-03 00:42:58.675923 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:42:58.676743 | orchestrator |
2025-05-03 00:42:58.682280 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-05-03 00:42:58.811164 | orchestrator | Saturday 03 May 2025 00:42:58 +0000 (0:00:00.160) 0:01:06.278 **********
2025-05-03 00:42:58.811328 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:42:58.811430 | orchestrator |
2025-05-03 00:42:58.812043 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-05-03 00:42:58.812311 | orchestrator | Saturday 03 May 2025 00:42:58 +0000 (0:00:00.137) 0:01:06.415 **********
2025-05-03 00:42:59.154331 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:42:59.155316 | orchestrator |
2025-05-03 00:42:59.155435 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-05-03 00:42:59.155963 | orchestrator | Saturday 03 May 2025 00:42:59 +0000 (0:00:00.342) 0:01:06.757 **********
2025-05-03 00:42:59.294719 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:42:59.295355 | orchestrator |
2025-05-03 00:42:59.298443 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-05-03 00:42:59.299537 | orchestrator | Saturday 03 May 2025 00:42:59 +0000 (0:00:00.140) 0:01:06.898 **********
2025-05-03 00:42:59.451885 | orchestrator | ok: [testbed-node-5] => {
2025-05-03 00:42:59.452985 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-05-03 00:42:59.453034 | orchestrator | }
2025-05-03 00:42:59.455702 | orchestrator |
2025-05-03 00:42:59.456001 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-05-03 00:42:59.456641 | orchestrator | Saturday 03 May 2025 00:42:59 +0000 (0:00:00.155) 0:01:07.054 **********
2025-05-03 00:42:59.597182 | orchestrator | ok: [testbed-node-5] => {
2025-05-03 00:42:59.598217 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-05-03 00:42:59.598262 | orchestrator | }
2025-05-03 00:42:59.599012 | orchestrator |
2025-05-03 00:42:59.599822 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-05-03 00:42:59.602280 | orchestrator | Saturday 03 May 2025 00:42:59 +0000 (0:00:00.146) 0:01:07.200 **********
2025-05-03 00:42:59.761163 | orchestrator | ok: [testbed-node-5] => {
2025-05-03 00:42:59.761564 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-05-03 00:42:59.762056 | orchestrator | }
2025-05-03 00:42:59.762663 | orchestrator |
2025-05-03 00:42:59.763151 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-05-03 00:42:59.763623 | orchestrator | Saturday 03 May 2025 00:42:59 +0000 (0:00:00.163) 0:01:07.364 **********
2025-05-03 00:43:00.285771 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:43:00.286454 | orchestrator |
2025-05-03 00:43:00.287076 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-05-03 00:43:00.287701 | orchestrator | Saturday 03 May 2025 00:43:00 +0000 (0:00:00.525) 0:01:07.889 **********
2025-05-03 00:43:00.820352 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:43:00.820538 | orchestrator |
2025-05-03 00:43:00.823259 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-05-03 00:43:00.823417 | orchestrator | Saturday 03 May 2025 00:43:00 +0000 (0:00:00.532) 0:01:08.422 **********
2025-05-03 00:43:01.321919 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:43:01.322153 | orchestrator |
2025-05-03 00:43:01.322610 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-05-03 00:43:01.323204 | orchestrator | Saturday 03 May 2025 00:43:01 +0000 (0:00:00.503) 0:01:08.925 **********
2025-05-03 00:43:01.471526 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:43:01.473982 | orchestrator |
2025-05-03 00:43:01.577490 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-05-03 00:43:01.577609 | orchestrator | Saturday 03 May 2025 00:43:01 +0000 (0:00:00.147) 0:01:09.072 **********
2025-05-03 00:43:01.577641 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:01.578087 | orchestrator |
2025-05-03 00:43:01.579051 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-05-03 00:43:01.579621 | orchestrator | Saturday 03 May 2025 00:43:01 +0000 (0:00:00.108) 0:01:09.181 **********
2025-05-03 00:43:01.677002 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:01.987558 | orchestrator |
2025-05-03 00:43:01.987671 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-05-03 00:43:01.987692 | orchestrator | Saturday 03 May 2025 00:43:01 +0000 (0:00:00.097) 0:01:09.279 **********
2025-05-03 00:43:01.987726 | orchestrator | ok: [testbed-node-5] => {
2025-05-03 00:43:01.988233 | orchestrator |  "vgs_report": {
2025-05-03 00:43:01.989391 | orchestrator |  "vg": []
2025-05-03 00:43:01.990395 | orchestrator |  }
2025-05-03 00:43:01.993138 | orchestrator | }
2025-05-03 00:43:01.993208 | orchestrator |
2025-05-03 00:43:01.993229 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-05-03 00:43:01.994068 | orchestrator | Saturday 03 May 2025 00:43:01 +0000 (0:00:00.312) 0:01:09.591 **********
2025-05-03 00:43:02.116776 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:02.116980 | orchestrator |
2025-05-03 00:43:02.117769 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-05-03 00:43:02.117934 | orchestrator | Saturday 03 May 2025 00:43:02 +0000 (0:00:00.129) 0:01:09.720 **********
2025-05-03 00:43:02.249140 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:02.251351 | orchestrator |
2025-05-03 00:43:02.389816 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-05-03 00:43:02.389964 | orchestrator | Saturday 03 May 2025 00:43:02 +0000 (0:00:00.131) 0:01:09.851 **********
2025-05-03 00:43:02.389998 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:02.390393 | orchestrator |
2025-05-03 00:43:02.390789 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-05-03 00:43:02.391633 | orchestrator | Saturday 03 May 2025 00:43:02 +0000 (0:00:00.141) 0:01:09.993 **********
2025-05-03 00:43:02.534438 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:02.534790 | orchestrator |
2025-05-03 00:43:02.535285 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-05-03 00:43:02.536219 | orchestrator | Saturday 03 May 2025 00:43:02 +0000 (0:00:00.142) 0:01:10.136 **********
2025-05-03 00:43:02.671207 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:02.672020 | orchestrator |
2025-05-03 00:43:02.672156 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-05-03 00:43:02.672384 | orchestrator | Saturday 03 May 2025 00:43:02 +0000 (0:00:00.138) 0:01:10.274 **********
2025-05-03 00:43:02.806701 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:02.807116 | orchestrator |
2025-05-03 00:43:02.807170 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-05-03 00:43:02.807705 | orchestrator | Saturday 03 May 2025 00:43:02 +0000 (0:00:00.135) 0:01:10.410 **********
2025-05-03 00:43:02.948086 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:02.949140 | orchestrator |
2025-05-03 00:43:02.949240 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-05-03 00:43:02.949271 | orchestrator | Saturday 03 May 2025 00:43:02 +0000 (0:00:00.141) 0:01:10.551 **********
2025-05-03 00:43:03.089763 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:03.090263 | orchestrator |
2025-05-03 00:43:03.091014 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-05-03 00:43:03.091761 | orchestrator | Saturday 03 May 2025 00:43:03 +0000 (0:00:00.141) 0:01:10.693 **********
2025-05-03 00:43:03.237618 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:03.238164 | orchestrator |
2025-05-03 00:43:03.238640 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-05-03 00:43:03.239495 | orchestrator | Saturday 03 May 2025 00:43:03 +0000 (0:00:00.147) 0:01:10.841 **********
2025-05-03 00:43:03.379705 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:03.380005 | orchestrator |
2025-05-03 00:43:03.381035 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-05-03 00:43:03.381616 | orchestrator | Saturday 03 May 2025 00:43:03 +0000 (0:00:00.140) 0:01:10.981 **********
2025-05-03 00:43:03.516252 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:03.516491 | orchestrator |
2025-05-03 00:43:03.517622 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-05-03 00:43:03.518358 | orchestrator | Saturday 03 May 2025 00:43:03 +0000 (0:00:00.137) 0:01:11.118 **********
2025-05-03 00:43:03.847950 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:03.848555 | orchestrator |
2025-05-03 00:43:03.849407 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-05-03 00:43:03.850429 | orchestrator | Saturday 03 May 2025 00:43:03 +0000 (0:00:00.332) 0:01:11.451 **********
2025-05-03 00:43:03.986452 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:03.986693 | orchestrator |
2025-05-03 00:43:03.987805 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-05-03 00:43:03.988406 | orchestrator | Saturday 03 May 2025 00:43:03 +0000 (0:00:00.137) 0:01:11.589 **********
2025-05-03 00:43:04.134668 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:04.136749 | orchestrator |
2025-05-03 00:43:04.136800 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-05-03 00:43:04.139282 | orchestrator | Saturday 03 May 2025 00:43:04 +0000 (0:00:00.149) 0:01:11.738 **********
2025-05-03 00:43:04.329744 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-63c4e6bd-963b-5ec8-a8d0-e52c79716553', 'data_vg': 'ceph-63c4e6bd-963b-5ec8-a8d0-e52c79716553'})
2025-05-03 00:43:04.329953 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0db6d06-6fa6-557d-977f-52f0cf84ead8', 'data_vg': 'ceph-f0db6d06-6fa6-557d-977f-52f0cf84ead8'})
2025-05-03 00:43:04.330592 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:04.330740 | orchestrator |
2025-05-03 00:43:04.331240 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-05-03 00:43:04.332058 | orchestrator | Saturday 03 May 2025 00:43:04 +0000 (0:00:00.194) 0:01:11.932 **********
2025-05-03 00:43:04.496208 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-63c4e6bd-963b-5ec8-a8d0-e52c79716553', 'data_vg': 'ceph-63c4e6bd-963b-5ec8-a8d0-e52c79716553'})
2025-05-03 00:43:04.496403 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0db6d06-6fa6-557d-977f-52f0cf84ead8', 'data_vg': 'ceph-f0db6d06-6fa6-557d-977f-52f0cf84ead8'})
2025-05-03 00:43:04.496702 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:04.498899 | orchestrator |
2025-05-03 00:43:04.501347 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-05-03 00:43:04.501568 | orchestrator | Saturday 03 May 2025 00:43:04 +0000 (0:00:00.166) 0:01:12.099 **********
2025-05-03 00:43:04.664496 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-63c4e6bd-963b-5ec8-a8d0-e52c79716553', 'data_vg': 'ceph-63c4e6bd-963b-5ec8-a8d0-e52c79716553'})
2025-05-03 00:43:04.665094 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0db6d06-6fa6-557d-977f-52f0cf84ead8', 'data_vg': 'ceph-f0db6d06-6fa6-557d-977f-52f0cf84ead8'})
2025-05-03 00:43:04.665520 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:04.665556 | orchestrator |
2025-05-03 00:43:04.665740 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-05-03 00:43:04.667413 | orchestrator | Saturday 03 May 2025 00:43:04 +0000 (0:00:00.168) 0:01:12.268 **********
2025-05-03 00:43:04.840398 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-63c4e6bd-963b-5ec8-a8d0-e52c79716553', 'data_vg': 'ceph-63c4e6bd-963b-5ec8-a8d0-e52c79716553'})
2025-05-03 00:43:04.840733 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0db6d06-6fa6-557d-977f-52f0cf84ead8', 'data_vg': 'ceph-f0db6d06-6fa6-557d-977f-52f0cf84ead8'})
2025-05-03 00:43:04.842119 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:04.843376 | orchestrator |
2025-05-03 00:43:04.844711 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-05-03 00:43:04.845419 | orchestrator | Saturday 03 May 2025 00:43:04 +0000 (0:00:00.175) 0:01:12.443 **********
2025-05-03 00:43:05.015116 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-63c4e6bd-963b-5ec8-a8d0-e52c79716553', 'data_vg': 'ceph-63c4e6bd-963b-5ec8-a8d0-e52c79716553'})
2025-05-03 00:43:05.018572 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0db6d06-6fa6-557d-977f-52f0cf84ead8', 'data_vg': 'ceph-f0db6d06-6fa6-557d-977f-52f0cf84ead8'})
2025-05-03 00:43:05.018652 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:05.018667 | orchestrator |
2025-05-03 00:43:05.018680 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-05-03 00:43:05.019139 | orchestrator | Saturday 03 May 2025 00:43:05 +0000 (0:00:00.171) 0:01:12.615 **********
2025-05-03 00:43:05.160931 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-63c4e6bd-963b-5ec8-a8d0-e52c79716553', 'data_vg': 'ceph-63c4e6bd-963b-5ec8-a8d0-e52c79716553'})
2025-05-03 00:43:05.161383 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0db6d06-6fa6-557d-977f-52f0cf84ead8', 'data_vg': 'ceph-f0db6d06-6fa6-557d-977f-52f0cf84ead8'})
2025-05-03 00:43:05.162478 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:05.162619 | orchestrator |
2025-05-03 00:43:05.163745 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-05-03 00:43:05.164027 | orchestrator | Saturday 03 May 2025 00:43:05 +0000 (0:00:00.149) 0:01:12.765 **********
2025-05-03 00:43:05.320594 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-63c4e6bd-963b-5ec8-a8d0-e52c79716553', 'data_vg': 'ceph-63c4e6bd-963b-5ec8-a8d0-e52c79716553'})
2025-05-03 00:43:05.321427 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0db6d06-6fa6-557d-977f-52f0cf84ead8', 'data_vg': 'ceph-f0db6d06-6fa6-557d-977f-52f0cf84ead8'})
2025-05-03 00:43:05.322433 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:05.323241 | orchestrator |
2025-05-03 00:43:05.324635 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-05-03 00:43:05.325621 | orchestrator | Saturday 03 May 2025 00:43:05 +0000 (0:00:00.156) 0:01:12.921 **********
2025-05-03 00:43:05.482971 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-63c4e6bd-963b-5ec8-a8d0-e52c79716553', 'data_vg': 'ceph-63c4e6bd-963b-5ec8-a8d0-e52c79716553'})
2025-05-03 00:43:05.484722 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0db6d06-6fa6-557d-977f-52f0cf84ead8', 'data_vg': 'ceph-f0db6d06-6fa6-557d-977f-52f0cf84ead8'})
2025-05-03 00:43:05.485210 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:05.487173 | orchestrator |
2025-05-03 00:43:05.487802 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-05-03 00:43:05.489198 | orchestrator | Saturday 03 May 2025 00:43:05 +0000 (0:00:00.164) 0:01:13.086 **********
2025-05-03 00:43:06.200750 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:43:06.201364 | orchestrator |
2025-05-03 00:43:06.202126 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-05-03 00:43:06.203177 | orchestrator | Saturday 03 May 2025 00:43:06 +0000 (0:00:00.716) 0:01:13.803 **********
2025-05-03 00:43:06.706198 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:43:06.706418 | orchestrator |
2025-05-03 00:43:06.706701 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-05-03 00:43:06.707240 | orchestrator | Saturday 03 May 2025 00:43:06 +0000 (0:00:00.504) 0:01:14.307 **********
2025-05-03 00:43:06.858351 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:43:06.858557 | orchestrator |
2025-05-03 00:43:06.859484 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-05-03 00:43:06.860282 | orchestrator | Saturday 03 May 2025 00:43:06 +0000 (0:00:00.154) 0:01:14.462 **********
2025-05-03 00:43:07.053084 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-63c4e6bd-963b-5ec8-a8d0-e52c79716553', 'vg_name': 'ceph-63c4e6bd-963b-5ec8-a8d0-e52c79716553'})
2025-05-03 00:43:07.053475 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-f0db6d06-6fa6-557d-977f-52f0cf84ead8', 'vg_name': 'ceph-f0db6d06-6fa6-557d-977f-52f0cf84ead8'})
2025-05-03 00:43:07.054464 | orchestrator |
2025-05-03 00:43:07.054735 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-05-03 00:43:07.055034 | orchestrator | Saturday 03 May 2025 00:43:07 +0000 (0:00:00.192) 0:01:14.655 **********
2025-05-03 00:43:07.226282 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-63c4e6bd-963b-5ec8-a8d0-e52c79716553', 'data_vg': 'ceph-63c4e6bd-963b-5ec8-a8d0-e52c79716553'})
2025-05-03 00:43:07.227393 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0db6d06-6fa6-557d-977f-52f0cf84ead8', 'data_vg': 'ceph-f0db6d06-6fa6-557d-977f-52f0cf84ead8'})
2025-05-03 00:43:07.228106 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:07.230326 | orchestrator |
2025-05-03 00:43:07.230888 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-05-03 00:43:07.230934 | orchestrator | Saturday 03 May 2025 00:43:07 +0000 (0:00:00.174) 0:01:14.829 **********
2025-05-03 00:43:07.396225 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-63c4e6bd-963b-5ec8-a8d0-e52c79716553', 'data_vg': 'ceph-63c4e6bd-963b-5ec8-a8d0-e52c79716553'})
2025-05-03 00:43:07.396958 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0db6d06-6fa6-557d-977f-52f0cf84ead8', 'data_vg': 'ceph-f0db6d06-6fa6-557d-977f-52f0cf84ead8'})
2025-05-03 00:43:07.397034 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:07.399332 | orchestrator |
2025-05-03 00:43:07.400355 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-05-03 00:43:07.401685 | orchestrator | Saturday 03 May 2025 00:43:07 +0000 (0:00:00.169) 0:01:14.999 **********
2025-05-03 00:43:07.572813 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-63c4e6bd-963b-5ec8-a8d0-e52c79716553', 'data_vg': 'ceph-63c4e6bd-963b-5ec8-a8d0-e52c79716553'})
2025-05-03 00:43:07.573707 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f0db6d06-6fa6-557d-977f-52f0cf84ead8', 'data_vg': 'ceph-f0db6d06-6fa6-557d-977f-52f0cf84ead8'})
2025-05-03 00:43:07.573748 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:07.573766 | orchestrator |
2025-05-03 00:43:07.573782 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-05-03 00:43:07.573806 | orchestrator | Saturday 03 May 2025 00:43:07 +0000 (0:00:00.175) 0:01:15.175 **********
2025-05-03 00:43:08.141241 | orchestrator | ok: [testbed-node-5] => {
2025-05-03 00:43:08.141677 | orchestrator |  "lvm_report": {
2025-05-03 00:43:08.142516 | orchestrator |  "lv": [
2025-05-03 00:43:08.143595 | orchestrator |  {
2025-05-03 00:43:08.144599 | orchestrator |  "lv_name": "osd-block-63c4e6bd-963b-5ec8-a8d0-e52c79716553",
2025-05-03 00:43:08.144711 | orchestrator |  "vg_name": "ceph-63c4e6bd-963b-5ec8-a8d0-e52c79716553"
2025-05-03 00:43:08.145400 | orchestrator |  },
2025-05-03 00:43:08.145651 | orchestrator |  {
2025-05-03 00:43:08.147040 | orchestrator |  "lv_name": "osd-block-f0db6d06-6fa6-557d-977f-52f0cf84ead8",
2025-05-03 00:43:08.147570 | orchestrator |  "vg_name": "ceph-f0db6d06-6fa6-557d-977f-52f0cf84ead8"
2025-05-03 00:43:08.147956 | orchestrator |  }
2025-05-03 00:43:08.148473 | orchestrator |  ],
2025-05-03 00:43:08.148666 | orchestrator |  "pv": [
2025-05-03 00:43:08.149678 | orchestrator |  {
2025-05-03 00:43:08.150460 | orchestrator |  "pv_name": "/dev/sdb",
2025-05-03 00:43:08.151114 | orchestrator |  "vg_name": "ceph-63c4e6bd-963b-5ec8-a8d0-e52c79716553"
2025-05-03 00:43:08.151199 | orchestrator |  },
2025-05-03 00:43:08.152424 | orchestrator |  {
2025-05-03 00:43:08.152590 | orchestrator |  "pv_name": "/dev/sdc",
2025-05-03 00:43:08.152620 | orchestrator |  "vg_name": "ceph-f0db6d06-6fa6-557d-977f-52f0cf84ead8"
2025-05-03 00:43:08.152956 | orchestrator |  }
2025-05-03 00:43:08.153416 | orchestrator |  ]
2025-05-03 00:43:08.153873 | orchestrator |  }
2025-05-03 00:43:08.154347 | orchestrator | }
2025-05-03 00:43:08.154609 | orchestrator |
2025-05-03 00:43:08.156073 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 00:43:08.156194 | orchestrator | 2025-05-03 00:43:08 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-03 00:43:08.156324 | orchestrator | 2025-05-03 00:43:08 | INFO  | Please wait and do not abort execution.
2025-05-03 00:43:08.156967 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-05-03 00:43:08.157175 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-05-03 00:43:08.157206 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-05-03 00:43:08.157561 | orchestrator |
2025-05-03 00:43:08.157928 | orchestrator |
2025-05-03 00:43:08.158367 | orchestrator |
2025-05-03 00:43:08.158610 | orchestrator | TASKS RECAP ********************************************************************
2025-05-03 00:43:08.159148 | orchestrator | Saturday 03 May 2025 00:43:08 +0000 (0:00:00.569) 0:01:15.745 **********
2025-05-03 00:43:08.159897 | orchestrator | ===============================================================================
2025-05-03 00:43:08.160153 | orchestrator | Create block VGs -------------------------------------------------------- 5.88s
2025-05-03 00:43:08.161423 | orchestrator | Create block LVs -------------------------------------------------------- 4.08s
2025-05-03 00:43:08.162187 | orchestrator | Print LVM report data --------------------------------------------------- 2.11s
2025-05-03 00:43:08.162222 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.98s
2025-05-03 00:43:08.162517 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.76s
2025-05-03 00:43:08.163748 | orchestrator | Add known links to the list of available block devices ------------------ 1.68s
2025-05-03 00:43:08.163995 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.52s
2025-05-03 00:43:08.165463 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.52s
2025-05-03 00:43:08.165659 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.51s
2025-05-03 00:43:08.165989 | orchestrator | Add known partitions to the list of available block devices ------------- 1.41s
2025-05-03 00:43:08.167422 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.09s
2025-05-03 00:43:08.167602 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s
2025-05-03 00:43:08.167632 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s
2025-05-03 00:43:08.167913 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.81s
2025-05-03 00:43:08.169018 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.75s
2025-05-03 00:43:08.169743 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.73s
2025-05-03 00:43:08.169904 | orchestrator | Get initial list of available block devices ----------------------------- 0.71s
2025-05-03 00:43:08.170187 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s
2025-05-03 00:43:08.170886 | orchestrator | Combine JSON from _lvs_cmd_output/_pvs_cmd_output ----------------------- 0.65s
2025-05-03 00:43:08.171279 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s
2025-05-03 00:43:09.997414 | orchestrator | 2025-05-03 00:43:09 | INFO  | Task 578fb083-4480-4d27-8898-ffdc726ce580 (facts) was prepared for execution.
2025-05-03 00:43:13.019604 | orchestrator | 2025-05-03 00:43:09 | INFO  | It takes a moment until task 578fb083-4480-4d27-8898-ffdc726ce580 (facts) has been started and output is visible here.
2025-05-03 00:43:13.019773 | orchestrator |
2025-05-03 00:43:13.020089 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-05-03 00:43:13.020862 | orchestrator |
2025-05-03 00:43:13.022198 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-03 00:43:13.023000 | orchestrator | Saturday 03 May 2025 00:43:13 +0000 (0:00:00.190) 0:00:00.190 **********
2025-05-03 00:43:13.887211 | orchestrator | ok: [testbed-manager]
2025-05-03 00:43:13.890378 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:43:13.890869 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:43:13.890900 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:43:13.890915 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:43:13.890936 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:43:13.891473 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:43:13.892295 | orchestrator |
2025-05-03 00:43:13.892761 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-03 00:43:13.893609 | orchestrator | Saturday 03 May 2025 00:43:13 +0000 (0:00:00.867) 0:00:01.058 **********
2025-05-03 00:43:14.026703 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:43:14.100625 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:43:14.171329 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:43:14.242249 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:43:14.312216 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:43:14.935316 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:43:14.935658 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:14.939246 | orchestrator |
2025-05-03 00:43:14.939456 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-03 00:43:14.939484 | orchestrator |
2025-05-03 00:43:14.939501 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-03 00:43:14.939521 | orchestrator | Saturday 03 May 2025 00:43:14 +0000 (0:00:01.051) 0:00:02.109 **********
2025-05-03 00:43:19.324734 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:43:19.324956 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:43:19.325693 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:43:19.326920 | orchestrator | ok: [testbed-manager]
2025-05-03 00:43:19.327930 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:43:19.331440 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:43:19.332603 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:43:19.332645 | orchestrator |
2025-05-03 00:43:19.332662 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-05-03 00:43:19.332679 | orchestrator |
2025-05-03 00:43:19.332702 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-05-03 00:43:19.639408 | orchestrator | Saturday 03 May 2025 00:43:19 +0000 (0:00:04.388) 0:00:06.498 **********
2025-05-03 00:43:19.639540 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:43:19.716626 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:43:19.787061 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:43:19.868283 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:43:19.942904 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:43:19.971425 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:43:19.971905 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:43:19.972916 | orchestrator |
2025-05-03 00:43:19.973818 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 00:43:19.974364 | orchestrator | 2025-05-03 00:43:19 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-03 00:43:19.974640 | orchestrator | 2025-05-03 00:43:19 | INFO  | Please wait and do not abort execution. 2025-05-03 00:43:19.975442 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-03 00:43:19.976236 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-03 00:43:19.977111 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-03 00:43:19.977727 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-03 00:43:19.978188 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-03 00:43:19.978700 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-03 00:43:19.979224 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-03 00:43:19.980069 | orchestrator | 2025-05-03 00:43:19.980510 | orchestrator | Saturday 03 May 2025 00:43:19 +0000 (0:00:00.648) 0:00:07.146 ********** 2025-05-03 00:43:19.981053 | orchestrator | =============================================================================== 2025-05-03 00:43:19.981543 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.39s 2025-05-03 00:43:19.982074 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.05s 2025-05-03 00:43:19.982486 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.87s 2025-05-03 00:43:19.982982 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.65s 2025-05-03 00:43:20.492337 | orchestrator | 2025-05-03 00:43:20.494920 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat May 3 00:43:20 UTC 2025 2025-05-03 00:43:20.495031 | 
orchestrator | 2025-05-03 00:43:21.866125 | orchestrator | 2025-05-03 00:43:21 | INFO  | Collection nutshell is prepared for execution 2025-05-03 00:43:21.870495 | orchestrator | 2025-05-03 00:43:21 | INFO  | D [0] - dotfiles 2025-05-03 00:43:21.870572 | orchestrator | 2025-05-03 00:43:21 | INFO  | D [0] - homer 2025-05-03 00:43:21.871718 | orchestrator | 2025-05-03 00:43:21 | INFO  | D [0] - netdata 2025-05-03 00:43:21.871744 | orchestrator | 2025-05-03 00:43:21 | INFO  | D [0] - openstackclient 2025-05-03 00:43:21.871759 | orchestrator | 2025-05-03 00:43:21 | INFO  | D [0] - phpmyadmin 2025-05-03 00:43:21.871773 | orchestrator | 2025-05-03 00:43:21 | INFO  | A [0] - common 2025-05-03 00:43:21.871793 | orchestrator | 2025-05-03 00:43:21 | INFO  | A [1] -- loadbalancer 2025-05-03 00:43:21.872119 | orchestrator | 2025-05-03 00:43:21 | INFO  | D [2] --- opensearch 2025-05-03 00:43:21.872147 | orchestrator | 2025-05-03 00:43:21 | INFO  | A [2] --- mariadb-ng 2025-05-03 00:43:21.872169 | orchestrator | 2025-05-03 00:43:21 | INFO  | D [3] ---- horizon 2025-05-03 00:43:21.872588 | orchestrator | 2025-05-03 00:43:21 | INFO  | A [3] ---- keystone 2025-05-03 00:43:21.872614 | orchestrator | 2025-05-03 00:43:21 | INFO  | A [4] ----- neutron 2025-05-03 00:43:21.872630 | orchestrator | 2025-05-03 00:43:21 | INFO  | D [5] ------ wait-for-nova 2025-05-03 00:43:21.872647 | orchestrator | 2025-05-03 00:43:21 | INFO  | A [5] ------ octavia 2025-05-03 00:43:21.872667 | orchestrator | 2025-05-03 00:43:21 | INFO  | D [4] ----- barbican 2025-05-03 00:43:21.873338 | orchestrator | 2025-05-03 00:43:21 | INFO  | D [4] ----- designate 2025-05-03 00:43:21.873364 | orchestrator | 2025-05-03 00:43:21 | INFO  | D [4] ----- ironic 2025-05-03 00:43:21.873381 | orchestrator | 2025-05-03 00:43:21 | INFO  | D [4] ----- placement 2025-05-03 00:43:21.873398 | orchestrator | 2025-05-03 00:43:21 | INFO  | D [4] ----- magnum 2025-05-03 00:43:21.873419 | orchestrator | 2025-05-03 00:43:21 | INFO  | A [1] 
-- openvswitch 2025-05-03 00:43:21.873742 | orchestrator | 2025-05-03 00:43:21 | INFO  | D [2] --- ovn 2025-05-03 00:43:21.873768 | orchestrator | 2025-05-03 00:43:21 | INFO  | D [1] -- memcached 2025-05-03 00:43:21.873789 | orchestrator | 2025-05-03 00:43:21 | INFO  | D [1] -- redis 2025-05-03 00:43:21.875022 | orchestrator | 2025-05-03 00:43:21 | INFO  | D [1] -- rabbitmq-ng 2025-05-03 00:43:21.875049 | orchestrator | 2025-05-03 00:43:21 | INFO  | A [0] - kubernetes 2025-05-03 00:43:21.875064 | orchestrator | 2025-05-03 00:43:21 | INFO  | D [1] -- kubeconfig 2025-05-03 00:43:21.875079 | orchestrator | 2025-05-03 00:43:21 | INFO  | A [1] -- copy-kubeconfig 2025-05-03 00:43:21.875094 | orchestrator | 2025-05-03 00:43:21 | INFO  | A [0] - ceph 2025-05-03 00:43:21.875115 | orchestrator | 2025-05-03 00:43:21 | INFO  | A [1] -- ceph-pools 2025-05-03 00:43:21.875501 | orchestrator | 2025-05-03 00:43:21 | INFO  | A [2] --- copy-ceph-keys 2025-05-03 00:43:21.875526 | orchestrator | 2025-05-03 00:43:21 | INFO  | A [3] ---- cephclient 2025-05-03 00:43:21.875547 | orchestrator | 2025-05-03 00:43:21 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-05-03 00:43:21.875741 | orchestrator | 2025-05-03 00:43:21 | INFO  | A [4] ----- wait-for-keystone 2025-05-03 00:43:21.875766 | orchestrator | 2025-05-03 00:43:21 | INFO  | D [5] ------ kolla-ceph-rgw 2025-05-03 00:43:21.875805 | orchestrator | 2025-05-03 00:43:21 | INFO  | D [5] ------ glance 2025-05-03 00:43:21.875821 | orchestrator | 2025-05-03 00:43:21 | INFO  | D [5] ------ cinder 2025-05-03 00:43:21.875857 | orchestrator | 2025-05-03 00:43:21 | INFO  | D [5] ------ nova 2025-05-03 00:43:21.875879 | orchestrator | 2025-05-03 00:43:21 | INFO  | A [4] ----- prometheus 2025-05-03 00:43:22.038581 | orchestrator | 2025-05-03 00:43:21 | INFO  | D [5] ------ grafana 2025-05-03 00:43:22.038705 | orchestrator | 2025-05-03 00:43:22 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-05-03 00:43:23.811482 | 
orchestrator | 2025-05-03 00:43:22 | INFO  | Tasks are running in the background
2025-05-03 00:43:23.811642 | orchestrator | 2025-05-03 00:43:23 | INFO  | No task IDs specified, wait for all currently running tasks
2025-05-03 00:43:25.941971 | orchestrator | 2025-05-03 00:43:25 | INFO  | Task dbc9d536-781b-4738-bd24-bf228ac717fa is in state STARTED
2025-05-03 00:43:25.942247 | orchestrator | 2025-05-03 00:43:25 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:43:25.942285 | orchestrator | 2025-05-03 00:43:25 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:43:25.942603 | orchestrator | 2025-05-03 00:43:25 | INFO  | Task 6f697e16-1695-4fba-80e4-038afed6b720 is in state STARTED
2025-05-03 00:43:25.945898 | orchestrator | 2025-05-03 00:43:25 | INFO  | Task 68de95a9-289b-4f98-ab0a-f09e1183ed7c is in state STARTED
2025-05-03 00:43:25.952441 | orchestrator | 2025-05-03 00:43:25 | INFO  | Task 2ab28bd4-20e6-4b26-8d08-612fd37dd748 is in state STARTED
2025-05-03 00:43:28.992753 | orchestrator | 2025-05-03 00:43:25 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:43:28.992941 | orchestrator | 2025-05-03 00:43:28 | INFO  | Task dbc9d536-781b-4738-bd24-bf228ac717fa is in state STARTED
2025-05-03 00:43:28.994588 | orchestrator | 2025-05-03 00:43:28 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:43:28.994637 | orchestrator | 2025-05-03 00:43:28 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:43:28.994661 | orchestrator | 2025-05-03 00:43:28 | INFO  | Task 6f697e16-1695-4fba-80e4-038afed6b720 is in state STARTED
2025-05-03 00:43:28.995177 | orchestrator | 2025-05-03 00:43:28 | INFO  | Task 68de95a9-289b-4f98-ab0a-f09e1183ed7c is in state STARTED
2025-05-03 00:43:28.995691 | orchestrator | 2025-05-03 00:43:28 | INFO  | Task 2ab28bd4-20e6-4b26-8d08-612fd37dd748 is in state STARTED
2025-05-03 00:43:32.044527 | orchestrator | 2025-05-03 00:43:28 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:43:32.044628 | orchestrator | 2025-05-03 00:43:32 | INFO  | Task dbc9d536-781b-4738-bd24-bf228ac717fa is in state STARTED
2025-05-03 00:43:32.047558 | orchestrator | 2025-05-03 00:43:32 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:43:32.047615 | orchestrator | 2025-05-03 00:43:32 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:43:32.048245 | orchestrator | 2025-05-03 00:43:32 | INFO  | Task 6f697e16-1695-4fba-80e4-038afed6b720 is in state STARTED
2025-05-03 00:43:32.048753 | orchestrator | 2025-05-03 00:43:32 | INFO  | Task 68de95a9-289b-4f98-ab0a-f09e1183ed7c is in state STARTED
2025-05-03 00:43:32.052378 | orchestrator | 2025-05-03 00:43:32 | INFO  | Task 2ab28bd4-20e6-4b26-8d08-612fd37dd748 is in state STARTED
2025-05-03 00:43:35.137464 | orchestrator | 2025-05-03 00:43:32 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:43:35.137589 | orchestrator | 2025-05-03 00:43:35 | INFO  | Task dbc9d536-781b-4738-bd24-bf228ac717fa is in state STARTED
2025-05-03 00:43:35.146904 | orchestrator | 2025-05-03 00:43:35 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:43:38.218927 | orchestrator | 2025-05-03 00:43:35 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:43:38.219036 | orchestrator | 2025-05-03 00:43:35 | INFO  | Task 6f697e16-1695-4fba-80e4-038afed6b720 is in state STARTED
2025-05-03 00:43:38.219055 | orchestrator | 2025-05-03 00:43:35 | INFO  | Task 68de95a9-289b-4f98-ab0a-f09e1183ed7c is in state STARTED
2025-05-03 00:43:38.219070 | orchestrator | 2025-05-03 00:43:35 | INFO  | Task 2ab28bd4-20e6-4b26-8d08-612fd37dd748 is in state STARTED
2025-05-03 00:43:38.219085 | orchestrator | 2025-05-03 00:43:35 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:43:38.219114 | orchestrator | 2025-05-03 00:43:38 | INFO  | Task dbc9d536-781b-4738-bd24-bf228ac717fa is in state STARTED
2025-05-03 00:43:38.219553 | orchestrator | 2025-05-03 00:43:38 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:43:38.220016 | orchestrator | 2025-05-03 00:43:38 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:43:38.220355 | orchestrator | 2025-05-03 00:43:38 | INFO  | Task 6f697e16-1695-4fba-80e4-038afed6b720 is in state STARTED
2025-05-03 00:43:38.221672 | orchestrator | 2025-05-03 00:43:38 | INFO  | Task 68de95a9-289b-4f98-ab0a-f09e1183ed7c is in state STARTED
2025-05-03 00:43:38.223024 | orchestrator | 2025-05-03 00:43:38 | INFO  | Task 2ab28bd4-20e6-4b26-8d08-612fd37dd748 is in state STARTED
2025-05-03 00:43:41.265936 | orchestrator | 2025-05-03 00:43:38 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:43:41.266090 | orchestrator | 2025-05-03 00:43:41 | INFO  | Task dbc9d536-781b-4738-bd24-bf228ac717fa is in state SUCCESS
2025-05-03 00:43:41.268946 | orchestrator |
2025-05-03 00:43:41.268976 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-05-03 00:43:41.268988 | orchestrator |
2025-05-03 00:43:41.269000 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.]
**** 2025-05-03 00:43:41.269011 | orchestrator | Saturday 03 May 2025 00:43:30 +0000 (0:00:00.459) 0:00:00.459 ********** 2025-05-03 00:43:41.269022 | orchestrator | changed: [testbed-manager] 2025-05-03 00:43:41.269035 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:43:41.269046 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:43:41.269057 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:43:41.269068 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:43:41.269079 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:43:41.269091 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:43:41.269102 | orchestrator | 2025-05-03 00:43:41.269113 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-05-03 00:43:41.269129 | orchestrator | Saturday 03 May 2025 00:43:33 +0000 (0:00:03.830) 0:00:04.289 ********** 2025-05-03 00:43:41.269141 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-05-03 00:43:41.269153 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-05-03 00:43:41.269168 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-05-03 00:43:41.269179 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-05-03 00:43:41.269190 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-05-03 00:43:41.269201 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-05-03 00:43:41.269212 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-05-03 00:43:41.269223 | orchestrator | 2025-05-03 00:43:41.269234 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-05-03 00:43:41.269245 | orchestrator | Saturday 03 May 2025 00:43:35 +0000 (0:00:01.661) 0:00:05.951 ********** 2025-05-03 00:43:41.269274 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-03 00:43:34.650115', 'end': '2025-05-03 00:43:34.656832', 'delta': '0:00:00.006717', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-03 00:43:41.269293 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-03 00:43:34.710701', 'end': '2025-05-03 00:43:34.720406', 'delta': '0:00:00.009705', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-03 00:43:41.269306 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-03 00:43:34.675739', 'end': '2025-05-03 00:43:34.681940', 'delta': '0:00:00.006201', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-03 00:43:41.269333 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-03 00:43:34.655260', 'end': '2025-05-03 00:43:34.659875', 'delta': '0:00:00.004615', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-03 00:43:41.269346 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-03 00:43:34.785991', 'end': '2025-05-03 00:43:34.795701', 'delta': '0:00:00.009710', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-03 00:43:41.269363 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-03 00:43:34.975937', 'end': '2025-05-03 00:43:34.985339', 'delta': '0:00:00.009402', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-03 00:43:41.269379 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-03 00:43:35.097461', 'end': '2025-05-03 00:43:35.106291', 'delta': '0:00:00.008830', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-03 00:43:41.269390 | orchestrator |
2025-05-03 00:43:41.269402 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-05-03 00:43:41.269413 | orchestrator | Saturday 03 May 2025 00:43:37 +0000 (0:00:02.304) 0:00:08.256 **********
2025-05-03 00:43:41.269424 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-05-03 00:43:41.269435 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-05-03 00:43:41.269446 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-05-03 00:43:41.269457 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-05-03 00:43:41.269468 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-05-03 00:43:41.269479 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-05-03 00:43:41.269490 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-05-03 00:43:41.269501 | orchestrator |
2025-05-03 00:43:41.269512 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 00:43:41.269523 | orchestrator | testbed-manager : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:43:41.269535 | orchestrator | testbed-node-0 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:43:41.269547 | orchestrator | testbed-node-1 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:43:41.269563 | orchestrator | testbed-node-2 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:43:41.269593 | orchestrator | testbed-node-3 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:43:41.269607 | orchestrator | testbed-node-4 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:43:41.269619 | orchestrator | testbed-node-5 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:43:41.269636 | orchestrator |
2025-05-03 00:43:41.269649 | orchestrator | Saturday 03 May 2025 00:43:40 +0000 (0:00:02.333) 0:00:10.589 **********
2025-05-03 00:43:41.269663 | orchestrator | ===============================================================================
2025-05-03 00:43:41.269675 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.83s
2025-05-03 00:43:41.269688 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.33s
2025-05-03 00:43:41.269700 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.30s
2025-05-03 00:43:41.269713 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.66s
2025-05-03 00:43:41.269729 | orchestrator | 2025-05-03 00:43:41 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:43:41.272269 | orchestrator | 2025-05-03 00:43:41 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:43:41.274223 | orchestrator | 2025-05-03 00:43:41 | INFO  | Task 6f697e16-1695-4fba-80e4-038afed6b720 is in state STARTED
2025-05-03 00:43:41.279433 | orchestrator | 2025-05-03 00:43:41 | INFO  | Task 68de95a9-289b-4f98-ab0a-f09e1183ed7c is in state STARTED
2025-05-03 00:43:41.285243 | orchestrator | 2025-05-03 00:43:41 | INFO  | Task 2ab28bd4-20e6-4b26-8d08-612fd37dd748 is in state STARTED
2025-05-03 00:43:44.337173 | orchestrator | 2025-05-03 00:43:41 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:43:44.337245 | orchestrator | 2025-05-03 00:43:44 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:43:44.346295 | orchestrator | 2025-05-03 00:43:44 | INFO  | Task
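The three `geerlingguy.dotfiles` tasks above (list the existing file with `ls -F`, remove it if a replacement is being linked, then link the dotfile into the home folder) amount to a replace-with-symlink routine. A minimal sketch of that sequence, assuming a cloned repository path and an item name — not the role's actual implementation:

```python
# Hedged sketch of the dotfiles link sequence seen in the play above.
# `repo`/`home`/`link_dotfile` are illustrative names, not role internals.
from pathlib import Path

def link_dotfile(repo: Path, home: Path, item: str) -> str:
    """Ensure home/item is a symlink to repo/item; mimic ok/changed states."""
    src = repo / item
    dest = home / item
    if dest.is_symlink() and dest.resolve() == src.resolve():
        return "ok"                      # already the expected link
    if dest.exists() or dest.is_symlink():
        dest.unlink()                    # "Remove existing dotfiles file ..."
    dest.symlink_to(src)                 # "Link dotfiles into home folder."
    return "changed"
```

Run twice against the same item, the first call reports `changed` and the second `ok` — the same idempotent pattern the play recap shows (`changed` on first deploy, `ok` thereafter).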
a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED 2025-05-03 00:43:44.348675 | orchestrator | 2025-05-03 00:43:44 | INFO  | Task 903127ce-5bf1-40a9-af76-072334020d0c is in state STARTED 2025-05-03 00:43:44.348721 | orchestrator | 2025-05-03 00:43:44 | INFO  | Task 6f697e16-1695-4fba-80e4-038afed6b720 is in state STARTED 2025-05-03 00:43:44.353792 | orchestrator | 2025-05-03 00:43:44 | INFO  | Task 68de95a9-289b-4f98-ab0a-f09e1183ed7c is in state STARTED 2025-05-03 00:43:44.361885 | orchestrator | 2025-05-03 00:43:44 | INFO  | Task 2ab28bd4-20e6-4b26-8d08-612fd37dd748 is in state STARTED 2025-05-03 00:43:47.418538 | orchestrator | 2025-05-03 00:43:44 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:43:47.418728 | orchestrator | 2025-05-03 00:43:47 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:43:47.418819 | orchestrator | 2025-05-03 00:43:47 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED 2025-05-03 00:43:47.418879 | orchestrator | 2025-05-03 00:43:47 | INFO  | Task 903127ce-5bf1-40a9-af76-072334020d0c is in state STARTED 2025-05-03 00:43:47.421113 | orchestrator | 2025-05-03 00:43:47 | INFO  | Task 6f697e16-1695-4fba-80e4-038afed6b720 is in state STARTED 2025-05-03 00:43:47.423316 | orchestrator | 2025-05-03 00:43:47 | INFO  | Task 68de95a9-289b-4f98-ab0a-f09e1183ed7c is in state STARTED 2025-05-03 00:43:47.424053 | orchestrator | 2025-05-03 00:43:47 | INFO  | Task 2ab28bd4-20e6-4b26-8d08-612fd37dd748 is in state STARTED 2025-05-03 00:43:50.501808 | orchestrator | 2025-05-03 00:43:47 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:43:50.501981 | orchestrator | 2025-05-03 00:43:50 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:43:50.503313 | orchestrator | 2025-05-03 00:43:50 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED 2025-05-03 00:43:50.503377 | orchestrator | 2025-05-03 00:43:50 | INFO  | Task 
903127ce-5bf1-40a9-af76-072334020d0c is in state STARTED 2025-05-03 00:43:50.506654 | orchestrator | 2025-05-03 00:43:50 | INFO  | Task 6f697e16-1695-4fba-80e4-038afed6b720 is in state STARTED 2025-05-03 00:43:50.511257 | orchestrator | 2025-05-03 00:43:50 | INFO  | Task 68de95a9-289b-4f98-ab0a-f09e1183ed7c is in state STARTED 2025-05-03 00:43:50.511603 | orchestrator | 2025-05-03 00:43:50 | INFO  | Task 2ab28bd4-20e6-4b26-8d08-612fd37dd748 is in state STARTED 2025-05-03 00:43:50.512406 | orchestrator | 2025-05-03 00:43:50 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:43:53.569440 | orchestrator | 2025-05-03 00:43:53 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:43:53.571101 | orchestrator | 2025-05-03 00:43:53 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED 2025-05-03 00:43:53.571163 | orchestrator | 2025-05-03 00:43:53 | INFO  | Task 903127ce-5bf1-40a9-af76-072334020d0c is in state STARTED 2025-05-03 00:43:53.571177 | orchestrator | 2025-05-03 00:43:53 | INFO  | Task 6f697e16-1695-4fba-80e4-038afed6b720 is in state STARTED 2025-05-03 00:43:53.571196 | orchestrator | 2025-05-03 00:43:53 | INFO  | Task 68de95a9-289b-4f98-ab0a-f09e1183ed7c is in state STARTED 2025-05-03 00:43:53.572695 | orchestrator | 2025-05-03 00:43:53 | INFO  | Task 2ab28bd4-20e6-4b26-8d08-612fd37dd748 is in state STARTED 2025-05-03 00:43:56.641540 | orchestrator | 2025-05-03 00:43:53 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:43:56.641721 | orchestrator | 2025-05-03 00:43:56 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:43:56.642262 | orchestrator | 2025-05-03 00:43:56 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED 2025-05-03 00:43:56.644119 | orchestrator | 2025-05-03 00:43:56 | INFO  | Task 903127ce-5bf1-40a9-af76-072334020d0c is in state STARTED 2025-05-03 00:43:56.646701 | orchestrator | 2025-05-03 00:43:56 | INFO  | Task 
6f697e16-1695-4fba-80e4-038afed6b720 is in state STARTED 2025-05-03 00:43:56.647945 | orchestrator | 2025-05-03 00:43:56 | INFO  | Task 68de95a9-289b-4f98-ab0a-f09e1183ed7c is in state STARTED 2025-05-03 00:43:56.648912 | orchestrator | 2025-05-03 00:43:56 | INFO  | Task 2ab28bd4-20e6-4b26-8d08-612fd37dd748 is in state STARTED 2025-05-03 00:43:59.720885 | orchestrator | 2025-05-03 00:43:56 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:43:59.721065 | orchestrator | 2025-05-03 00:43:59 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:43:59.721740 | orchestrator | 2025-05-03 00:43:59 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED 2025-05-03 00:43:59.721780 | orchestrator | 2025-05-03 00:43:59 | INFO  | Task 903127ce-5bf1-40a9-af76-072334020d0c is in state STARTED 2025-05-03 00:43:59.725034 | orchestrator | 2025-05-03 00:43:59 | INFO  | Task 6f697e16-1695-4fba-80e4-038afed6b720 is in state STARTED 2025-05-03 00:43:59.729955 | orchestrator | 2025-05-03 00:43:59 | INFO  | Task 68de95a9-289b-4f98-ab0a-f09e1183ed7c is in state STARTED 2025-05-03 00:43:59.732637 | orchestrator | 2025-05-03 00:43:59 | INFO  | Task 2ab28bd4-20e6-4b26-8d08-612fd37dd748 is in state STARTED 2025-05-03 00:44:02.784555 | orchestrator | 2025-05-03 00:43:59 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:44:02.784742 | orchestrator | 2025-05-03 00:44:02 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:44:02.786248 | orchestrator | 2025-05-03 00:44:02 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED 2025-05-03 00:44:02.786360 | orchestrator | 2025-05-03 00:44:02 | INFO  | Task 903127ce-5bf1-40a9-af76-072334020d0c is in state STARTED 2025-05-03 00:44:02.786433 | orchestrator | 2025-05-03 00:44:02 | INFO  | Task 6f697e16-1695-4fba-80e4-038afed6b720 is in state STARTED 2025-05-03 00:44:02.787081 | orchestrator | 2025-05-03 00:44:02 | INFO  | Task 
68de95a9-289b-4f98-ab0a-f09e1183ed7c is in state STARTED 2025-05-03 00:44:05.855680 | orchestrator | 2025-05-03 00:44:02 | INFO  | Task 2ab28bd4-20e6-4b26-8d08-612fd37dd748 is in state STARTED 2025-05-03 00:44:05.855896 | orchestrator | 2025-05-03 00:44:02 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:44:05.855940 | orchestrator | 2025-05-03 00:44:05 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:44:05.860770 | orchestrator | 2025-05-03 00:44:05 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED 2025-05-03 00:44:05.864010 | orchestrator | 2025-05-03 00:44:05 | INFO  | Task 903127ce-5bf1-40a9-af76-072334020d0c is in state STARTED 2025-05-03 00:44:05.866065 | orchestrator | 2025-05-03 00:44:05 | INFO  | Task 6f697e16-1695-4fba-80e4-038afed6b720 is in state SUCCESS 2025-05-03 00:44:05.866098 | orchestrator | 2025-05-03 00:44:05 | INFO  | Task 68de95a9-289b-4f98-ab0a-f09e1183ed7c is in state STARTED 2025-05-03 00:44:05.866712 | orchestrator | 2025-05-03 00:44:05 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:44:05.869067 | orchestrator | 2025-05-03 00:44:05 | INFO  | Task 2ab28bd4-20e6-4b26-8d08-612fd37dd748 is in state STARTED 2025-05-03 00:44:08.933627 | orchestrator | 2025-05-03 00:44:05 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:44:08.933802 | orchestrator | 2025-05-03 00:44:08 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:44:08.933910 | orchestrator | 2025-05-03 00:44:08 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED 2025-05-03 00:44:08.936126 | orchestrator | 2025-05-03 00:44:08 | INFO  | Task 903127ce-5bf1-40a9-af76-072334020d0c is in state STARTED 2025-05-03 00:44:08.936417 | orchestrator | 2025-05-03 00:44:08 | INFO  | Task 68de95a9-289b-4f98-ab0a-f09e1183ed7c is in state STARTED 2025-05-03 00:44:08.936447 | orchestrator | 2025-05-03 00:44:08 | INFO  | Task 
48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:44:08.938328 | orchestrator | 2025-05-03 00:44:08 | INFO  | Task 2ab28bd4-20e6-4b26-8d08-612fd37dd748 is in state STARTED 2025-05-03 00:44:11.986072 | orchestrator | 2025-05-03 00:44:08 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:44:11.986197 | orchestrator | 2025-05-03 00:44:11 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:44:11.988304 | orchestrator | 2025-05-03 00:44:11 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED 2025-05-03 00:44:11.988861 | orchestrator | 2025-05-03 00:44:11 | INFO  | Task 903127ce-5bf1-40a9-af76-072334020d0c is in state STARTED 2025-05-03 00:44:11.988902 | orchestrator | 2025-05-03 00:44:11 | INFO  | Task 68de95a9-289b-4f98-ab0a-f09e1183ed7c is in state STARTED 2025-05-03 00:44:11.988925 | orchestrator | 2025-05-03 00:44:11 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:44:11.991022 | orchestrator | 2025-05-03 00:44:11 | INFO  | Task 2ab28bd4-20e6-4b26-8d08-612fd37dd748 is in state STARTED 2025-05-03 00:44:15.046606 | orchestrator | 2025-05-03 00:44:11 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:44:15.046730 | orchestrator | 2025-05-03 00:44:15 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:44:15.048116 | orchestrator | 2025-05-03 00:44:15 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED 2025-05-03 00:44:15.048993 | orchestrator | 2025-05-03 00:44:15 | INFO  | Task 903127ce-5bf1-40a9-af76-072334020d0c is in state STARTED 2025-05-03 00:44:15.050677 | orchestrator | 2025-05-03 00:44:15 | INFO  | Task 68de95a9-289b-4f98-ab0a-f09e1183ed7c is in state STARTED 2025-05-03 00:44:15.051945 | orchestrator | 2025-05-03 00:44:15 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:44:15.053742 | orchestrator | 2025-05-03 00:44:15 | INFO  | Task 
2ab28bd4-20e6-4b26-8d08-612fd37dd748 is in state STARTED 2025-05-03 00:44:18.113161 | orchestrator | 2025-05-03 00:44:15 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:44:18.113285 | orchestrator | 2025-05-03 00:44:18 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:44:18.122590 | orchestrator | 2025-05-03 00:44:18 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED 2025-05-03 00:44:18.122691 | orchestrator | 2025-05-03 00:44:18 | INFO  | Task 903127ce-5bf1-40a9-af76-072334020d0c is in state STARTED 2025-05-03 00:44:21.172361 | orchestrator | 2025-05-03 00:44:18 | INFO  | Task 68de95a9-289b-4f98-ab0a-f09e1183ed7c is in state STARTED 2025-05-03 00:44:21.172469 | orchestrator | 2025-05-03 00:44:18 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:44:21.172496 | orchestrator | 2025-05-03 00:44:18 | INFO  | Task 2ab28bd4-20e6-4b26-8d08-612fd37dd748 is in state STARTED 2025-05-03 00:44:21.172521 | orchestrator | 2025-05-03 00:44:18 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:44:21.172564 | orchestrator | 2025-05-03 00:44:21 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:44:21.174415 | orchestrator | 2025-05-03 00:44:21 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED 2025-05-03 00:44:21.174451 | orchestrator | 2025-05-03 00:44:21 | INFO  | Task 903127ce-5bf1-40a9-af76-072334020d0c is in state STARTED 2025-05-03 00:44:21.174476 | orchestrator | 2025-05-03 00:44:21 | INFO  | Task 68de95a9-289b-4f98-ab0a-f09e1183ed7c is in state SUCCESS 2025-05-03 00:44:21.179493 | orchestrator | 2025-05-03 00:44:21 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:44:21.184950 | orchestrator | 2025-05-03 00:44:21 | INFO  | Task 2ab28bd4-20e6-4b26-8d08-612fd37dd748 is in state STARTED 2025-05-03 00:44:24.226318 | orchestrator | 2025-05-03 00:44:21 | INFO  | Wait 1 
second(s) until the next check 2025-05-03 00:44:24.226406 | orchestrator | 2025-05-03 00:44:24 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:44:24.226552 | orchestrator | 2025-05-03 00:44:24 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED 2025-05-03 00:44:24.226935 | orchestrator | 2025-05-03 00:44:24 | INFO  | Task 903127ce-5bf1-40a9-af76-072334020d0c is in state STARTED 2025-05-03 00:44:24.227449 | orchestrator | 2025-05-03 00:44:24 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:44:24.233323 | orchestrator | 2025-05-03 00:44:24 | INFO  | Task 2ab28bd4-20e6-4b26-8d08-612fd37dd748 is in state STARTED 2025-05-03 00:44:27.273194 | orchestrator | 2025-05-03 00:44:24 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:44:27.273338 | orchestrator | 2025-05-03 00:44:27 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:44:27.274304 | orchestrator | 2025-05-03 00:44:27 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED 2025-05-03 00:44:27.274951 | orchestrator | 2025-05-03 00:44:27 | INFO  | Task 903127ce-5bf1-40a9-af76-072334020d0c is in state STARTED 2025-05-03 00:44:27.276233 | orchestrator | 2025-05-03 00:44:27 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:44:27.276690 | orchestrator | 2025-05-03 00:44:27 | INFO  | Task 2ab28bd4-20e6-4b26-8d08-612fd37dd748 is in state STARTED 2025-05-03 00:44:27.277937 | orchestrator | 2025-05-03 00:44:27 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:44:30.335608 | orchestrator | 2025-05-03 00:44:30 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:44:30.337627 | orchestrator | 2025-05-03 00:44:30 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED 2025-05-03 00:44:30.338217 | orchestrator | 2025-05-03 00:44:30 | INFO  | Task 
903127ce-5bf1-40a9-af76-072334020d0c is in state STARTED 2025-05-03 00:44:30.338705 | orchestrator | 2025-05-03 00:44:30 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:44:30.341217 | orchestrator | 2025-05-03 00:44:30 | INFO  | Task 2ab28bd4-20e6-4b26-8d08-612fd37dd748 is in state STARTED 2025-05-03 00:44:33.381623 | orchestrator | 2025-05-03 00:44:30 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:44:33.381766 | orchestrator | 2025-05-03 00:44:33 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:44:33.383024 | orchestrator | 2025-05-03 00:44:33 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED 2025-05-03 00:44:33.385981 | orchestrator | 2025-05-03 00:44:33 | INFO  | Task 903127ce-5bf1-40a9-af76-072334020d0c is in state STARTED 2025-05-03 00:44:33.388082 | orchestrator | 2025-05-03 00:44:33 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:44:33.391405 | orchestrator | 2025-05-03 00:44:33 | INFO  | Task 2ab28bd4-20e6-4b26-8d08-612fd37dd748 is in state STARTED 2025-05-03 00:44:36.435959 | orchestrator | 2025-05-03 00:44:33 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:44:36.436138 | orchestrator | 2025-05-03 00:44:36 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:44:36.438154 | orchestrator | 2025-05-03 00:44:36 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED 2025-05-03 00:44:36.439310 | orchestrator | 2025-05-03 00:44:36 | INFO  | Task 903127ce-5bf1-40a9-af76-072334020d0c is in state STARTED 2025-05-03 00:44:36.440317 | orchestrator | 2025-05-03 00:44:36 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:44:36.441073 | orchestrator | 2025-05-03 00:44:36 | INFO  | Task 2ab28bd4-20e6-4b26-8d08-612fd37dd748 is in state SUCCESS 2025-05-03 00:44:36.443107 | orchestrator | 2025-05-03 00:44:36 | INFO  | Wait 1 
second(s) until the next check
2025-05-03 00:44:36.443438 | orchestrator |
2025-05-03 00:44:36.443466 | orchestrator |
2025-05-03 00:44:36.443481 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-05-03 00:44:36.443507 | orchestrator |
2025-05-03 00:44:36.443522 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-05-03 00:44:36.443537 | orchestrator | Saturday 03 May 2025 00:43:29 +0000 (0:00:00.379) 0:00:00.379 **********
2025-05-03 00:44:36.443551 | orchestrator | ok: [testbed-manager] => {
2025-05-03 00:44:36.443567 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-05-03 00:44:36.443584 | orchestrator | }
2025-05-03 00:44:36.443647 | orchestrator |
2025-05-03 00:44:36.443664 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-05-03 00:44:36.443680 | orchestrator | Saturday 03 May 2025 00:43:29 +0000 (0:00:00.204) 0:00:00.583 **********
2025-05-03 00:44:36.443696 | orchestrator | ok: [testbed-manager]
2025-05-03 00:44:36.443713 | orchestrator |
2025-05-03 00:44:36.443729 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-05-03 00:44:36.443765 | orchestrator | Saturday 03 May 2025 00:43:31 +0000 (0:00:01.329) 0:00:01.912 **********
2025-05-03 00:44:36.443781 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-05-03 00:44:36.443797 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-05-03 00:44:36.443812 | orchestrator |
2025-05-03 00:44:36.443861 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-05-03 00:44:36.443877 | orchestrator | Saturday 03 May 2025 00:43:32 +0000 (0:00:00.959) 0:00:02.871 **********
2025-05-03 00:44:36.443892 | orchestrator | changed: [testbed-manager]
2025-05-03 00:44:36.443908 | orchestrator |
2025-05-03 00:44:36.443925 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-05-03 00:44:36.443940 | orchestrator | Saturday 03 May 2025 00:43:34 +0000 (0:00:02.181) 0:00:05.053 **********
2025-05-03 00:44:36.443953 | orchestrator | changed: [testbed-manager]
2025-05-03 00:44:36.443967 | orchestrator |
2025-05-03 00:44:36.443981 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-05-03 00:44:36.443994 | orchestrator | Saturday 03 May 2025 00:43:35 +0000 (0:00:01.380) 0:00:06.434 **********
2025-05-03 00:44:36.444008 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-05-03 00:44:36.444022 | orchestrator | ok: [testbed-manager]
2025-05-03 00:44:36.444089 | orchestrator |
2025-05-03 00:44:36.444104 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-05-03 00:44:36.444118 | orchestrator | Saturday 03 May 2025 00:44:00 +0000 (0:00:24.945) 0:00:31.380 **********
2025-05-03 00:44:36.444132 | orchestrator | changed: [testbed-manager]
2025-05-03 00:44:36.444145 | orchestrator |
2025-05-03 00:44:36.444159 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 00:44:36.444173 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:44:36.444189 | orchestrator |
2025-05-03 00:44:36.444203 | orchestrator | Saturday 03 May 2025 00:44:02 +0000 (0:00:02.364) 0:00:33.744 **********
2025-05-03 00:44:36.444217 | orchestrator | ===============================================================================
2025-05-03 00:44:36.444231 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.95s
2025-05-03 00:44:36.444245 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.36s
2025-05-03 00:44:36.444258 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.18s
2025-05-03 00:44:36.444278 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.38s
2025-05-03 00:44:36.444292 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.33s
2025-05-03 00:44:36.444306 | orchestrator | osism.services.homer : Create required directories ---------------------- 0.96s
2025-05-03 00:44:36.444320 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.20s
2025-05-03 00:44:36.444333 | orchestrator |
2025-05-03 00:44:36.444347 | orchestrator |
2025-05-03 00:44:36.444361 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-05-03 00:44:36.444374 | orchestrator |
2025-05-03 00:44:36.444388 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-05-03 00:44:36.444402 | orchestrator | Saturday 03 May 2025 00:43:28 +0000 (0:00:00.263) 0:00:00.263 **********
2025-05-03 00:44:36.444416 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-05-03 00:44:36.444432 | orchestrator |
2025-05-03 00:44:36.444445 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-05-03 00:44:36.444459 | orchestrator | Saturday 03 May 2025 00:43:29 +0000 (0:00:00.370) 0:00:00.633 **********
2025-05-03 00:44:36.444473 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-05-03 00:44:36.444487 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-05-03 00:44:36.444509 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-05-03 00:44:36.444523 | orchestrator |
2025-05-03 00:44:36.444537 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-05-03 00:44:36.444551 | orchestrator | Saturday 03 May 2025 00:43:30 +0000 (0:00:01.166) 0:00:01.799 **********
2025-05-03 00:44:36.444564 | orchestrator | changed: [testbed-manager]
2025-05-03 00:44:36.444578 | orchestrator |
2025-05-03 00:44:36.444592 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-05-03 00:44:36.444606 | orchestrator | Saturday 03 May 2025 00:43:31 +0000 (0:00:01.445) 0:00:03.245 **********
2025-05-03 00:44:36.444619 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-05-03 00:44:36.444634 | orchestrator | ok: [testbed-manager]
2025-05-03 00:44:36.444648 | orchestrator |
2025-05-03 00:44:36.444673 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-05-03 00:44:36.444688 | orchestrator | Saturday 03 May 2025 00:44:10 +0000 (0:00:38.670) 0:00:41.916 **********
2025-05-03 00:44:36.444702 | orchestrator | changed: [testbed-manager]
2025-05-03 00:44:36.444717 | orchestrator |
2025-05-03 00:44:36.444731 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-05-03 00:44:36.444745 | orchestrator | Saturday 03 May 2025 00:44:11 +0000 (0:00:01.139) 0:00:43.055 **********
2025-05-03 00:44:36.444758 | orchestrator | ok: [testbed-manager]
2025-05-03 00:44:36.444772 | orchestrator |
2025-05-03 00:44:36.444786 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-05-03 00:44:36.444800 | orchestrator | Saturday 03 May 2025 00:44:12 +0000 (0:00:01.021) 0:00:44.077 **********
2025-05-03 00:44:36.444863 | orchestrator | changed: [testbed-manager]
2025-05-03 00:44:36.444880 | orchestrator |
2025-05-03 00:44:36.444894 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-05-03 00:44:36.444908 | orchestrator | Saturday 03 May 2025 00:44:15 +0000 (0:00:03.185) 0:00:47.263 **********
2025-05-03 00:44:36.444922 | orchestrator | changed: [testbed-manager]
2025-05-03 00:44:36.444935 | orchestrator |
2025-05-03 00:44:36.444949 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-05-03 00:44:36.444963 | orchestrator | Saturday 03 May 2025 00:44:17 +0000 (0:00:01.579) 0:00:48.843 **********
2025-05-03 00:44:36.444976 | orchestrator | changed: [testbed-manager]
2025-05-03 00:44:36.444990 | orchestrator |
2025-05-03 00:44:36.445004 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-05-03 00:44:36.445050 | orchestrator | Saturday 03 May 2025 00:44:18 +0000 (0:00:00.835) 0:00:49.679 **********
2025-05-03 00:44:36.445067 | orchestrator | ok: [testbed-manager]
2025-05-03 00:44:36.445081 | orchestrator |
2025-05-03 00:44:36.445095 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 00:44:36.445109 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:44:36.445152 | orchestrator |
2025-05-03 00:44:36.445170 | orchestrator | Saturday 03 May 2025 00:44:18 +0000 (0:00:00.428) 0:00:50.107 **********
2025-05-03 00:44:36.445184 | orchestrator | ===============================================================================
2025-05-03 00:44:36.445198 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 38.67s
2025-05-03 00:44:36.445212 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.19s
2025-05-03 00:44:36.445226 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.58s
2025-05-03 00:44:36.445245 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.45s
2025-05-03 00:44:36.445259 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.16s
2025-05-03 00:44:36.445273 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.14s
2025-05-03 00:44:36.445319 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.02s
2025-05-03 00:44:36.445344 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.84s
2025-05-03 00:44:36.445358 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.43s
2025-05-03 00:44:36.445372 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.37s
2025-05-03 00:44:36.445386 | orchestrator |
2025-05-03 00:44:36.445400 | orchestrator |
2025-05-03 00:44:36.445414 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-03 00:44:36.445427 | orchestrator |
2025-05-03 00:44:36.445441 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-03 00:44:36.445455 | orchestrator | Saturday 03 May 2025 00:43:29 +0000 (0:00:00.261) 0:00:00.261 **********
2025-05-03 00:44:36.445469 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-05-03 00:44:36.445483 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-05-03 00:44:36.445497 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-05-03 00:44:36.445510 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-05-03 00:44:36.445524 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-05-03 00:44:36.445538 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-05-03 00:44:36.445551 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-05-03 00:44:36.445565 | orchestrator |
2025-05-03 00:44:36.445579 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-05-03 00:44:36.445593 | orchestrator |
2025-05-03 00:44:36.445607 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-05-03 00:44:36.445621 | orchestrator | Saturday 03 May 2025 00:43:31 +0000 (0:00:01.879) 0:00:02.141 **********
2025-05-03 00:44:36.445648 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-03 00:44:36.445665 | orchestrator |
2025-05-03 00:44:36.445679 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-05-03 00:44:36.445693 | orchestrator | Saturday 03 May 2025 00:43:32 +0000 (0:00:01.550) 0:00:03.692 **********
2025-05-03 00:44:36.445707 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:44:36.445720 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:44:36.445734 | orchestrator | ok: [testbed-manager]
2025-05-03 00:44:36.445748 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:44:36.445761 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:44:36.445775 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:44:36.445789 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:44:36.445802 | orchestrator |
2025-05-03 00:44:36.445816 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-05-03 00:44:36.445869 | orchestrator | Saturday 03 May 2025 00:43:35 +0000 (0:00:02.340) 0:00:06.033 **********
2025-05-03 00:44:36.445885 | orchestrator | ok: [testbed-manager]
2025-05-03 00:44:36.445898 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:44:36.445912 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:44:36.445926 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:44:36.445939 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:44:36.445953 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:44:36.445967 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:44:36.445980 | orchestrator |
2025-05-03 00:44:36.445994 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-05-03 00:44:36.446008 | orchestrator | Saturday 03 May 2025 00:43:38 +0000 (0:00:03.427) 0:00:09.460 **********
2025-05-03 00:44:36.446081 | orchestrator | changed: [testbed-manager]
2025-05-03 00:44:36.446102 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:44:36.446118 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:44:36.446132 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:44:36.446146 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:44:36.446160 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:44:36.446181 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:44:36.446195 | orchestrator |
2025-05-03 00:44:36.446209 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-05-03 00:44:36.446223 | orchestrator | Saturday 03 May 2025 00:43:41 +0000 (0:00:02.494) 0:00:11.954 **********
2025-05-03 00:44:36.446237 | orchestrator | changed: [testbed-manager]
2025-05-03 00:44:36.446251 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:44:36.446265 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:44:36.446278 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:44:36.446292 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:44:36.446306 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:44:36.446319 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:44:36.446333 | orchestrator |
2025-05-03 00:44:36.446347 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-05-03 00:44:36.446361 | orchestrator | Saturday 03 May 2025 00:43:50 +0000 (0:00:08.970) 0:00:20.924 **********
2025-05-03 00:44:36.446374 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:44:36.446392 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:44:36.446416 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:44:36.446439 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:44:36.446461 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:44:36.446484 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:44:36.446506 | orchestrator | changed: [testbed-manager]
2025-05-03 00:44:36.446531 | orchestrator |
2025-05-03 00:44:36.446555 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-05-03 00:44:36.446577 | orchestrator | Saturday 03 May 2025 00:44:09 +0000 (0:00:18.962) 0:00:39.887 **********
2025-05-03 00:44:36.446592 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-03 00:44:36.446612 | orchestrator |
2025-05-03 00:44:36.446626 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-05-03 00:44:36.446640 | orchestrator | Saturday 03 May 2025 00:44:11 +0000 (0:00:02.727) 0:00:42.614 **********
2025-05-03 00:44:36.446654 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-05-03 00:44:36.446668 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-05-03 00:44:36.446682 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-05-03 00:44:36.446696 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-05-03 00:44:36.446709 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-05-03 00:44:36.446723 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-05-03 00:44:36.446737 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-05-03 00:44:36.446750 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-05-03 00:44:36.446764 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-05-03 00:44:36.446778 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-05-03 00:44:36.446791 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-05-03 00:44:36.446805 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-05-03 00:44:36.446840 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-05-03 00:44:36.446855 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-05-03 00:44:36.446869 | orchestrator |
2025-05-03 00:44:36.446883 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-05-03 00:44:36.446897 | orchestrator | Saturday 03 May 2025 00:44:18 +0000 (0:00:06.587) 0:00:49.201 **********
2025-05-03 00:44:36.446921 | orchestrator | ok: [testbed-manager]
2025-05-03 00:44:36.446996 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:44:36.447014 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:44:36.447029 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:44:36.447043 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:44:36.447067 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:44:36.447081 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:44:36.447095 | orchestrator |
2025-05-03 00:44:36.447109 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-05-03 00:44:36.447123 | orchestrator | Saturday 03 May 2025 00:44:20 +0000 (0:00:02.137) 0:00:51.339 **********
2025-05-03 00:44:36.447137 | orchestrator | changed: [testbed-manager]
2025-05-03 00:44:36.447151 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:44:36.447165 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:44:36.447178 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:44:36.447192 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:44:36.447206 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:44:36.447219 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:44:36.447233 | orchestrator |
2025-05-03 00:44:36.447247 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-05-03 00:44:36.447267 | orchestrator | Saturday 03 May 2025 00:44:22 +0000 (0:00:02.141) 0:00:53.480 **********
2025-05-03 00:44:36.447281 | orchestrator | ok: [testbed-manager]
2025-05-03 00:44:36.447295 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:44:36.447309 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:44:36.447323 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:44:36.447346 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:44:36.447361 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:44:36.447375 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:44:36.447389 | orchestrator |
2025-05-03 00:44:36.447403 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-05-03 00:44:36.447417 | orchestrator | Saturday 03 May 2025 00:44:24 +0000 (0:00:02.075) 0:00:55.555 **********
2025-05-03 00:44:36.447431 | orchestrator | ok: [testbed-manager]
2025-05-03 00:44:36.447444 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:44:36.447458 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:44:36.447472 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:44:36.447485 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:44:36.447499 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:44:36.447513 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:44:36.447527 | orchestrator |
2025-05-03 00:44:36.447541 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-05-03 00:44:36.447555 | orchestrator | Saturday 03 May 2025 00:44:26 +0000 (0:00:02.069) 0:00:57.625 **********
2025-05-03 00:44:36.447569 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-05-03 00:44:36.447586 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-03 00:44:36.447600 | orchestrator |
2025-05-03 00:44:36.447614 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-05-03 00:44:36.447628 | orchestrator | Saturday 03 May 2025 00:44:28 +0000 (0:00:01.363) 0:00:58.988 **********
2025-05-03 00:44:36.447641 | orchestrator | changed: [testbed-manager]
2025-05-03 00:44:36.447655 | orchestrator |
2025-05-03 00:44:36.447669 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-05-03 00:44:36.447683 | orchestrator | Saturday 03 May 2025 00:44:30 +0000 (0:00:02.042) 0:01:01.030 **********
2025-05-03 00:44:36.447697 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:44:36.447711 | orchestrator | changed: [testbed-manager]
2025-05-03 00:44:36.447725 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:44:36.447748 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:44:36.447764 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:44:36.447778 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:44:36.447792 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:44:36.447806 | orchestrator |
2025-05-03 00:44:36.447879 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 00:44:36.447896 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:44:36.447918 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:44:36.447933 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:44:36.447953 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:44:36.447968 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:44:36.447981 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:44:36.447995 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:44:36.448009 | orchestrator |
2025-05-03 00:44:36.448023 | orchestrator | Saturday 03 May 2025 00:44:33 +0000 (0:00:03.242) 0:01:04.273 **********
2025-05-03 00:44:36.448037 | orchestrator | ===============================================================================
2025-05-03 00:44:36.448051 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 18.96s
2025-05-03 00:44:36.448065 | orchestrator | osism.services.netdata : Add repository --------------------------------- 8.97s
2025-05-03 00:44:36.448078 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.59s
2025-05-03 00:44:36.448092 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.43s
2025-05-03 00:44:36.448106 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.24s
2025-05-03 00:44:36.448120 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.73s
2025-05-03 00:44:36.448133 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.49s
2025-05-03 00:44:36.448147 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.34s
2025-05-03 00:44:36.448161 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.14s
2025-05-03 00:44:36.448174 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 2.14s
2025-05-03 00:44:36.448188 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.08s
2025-05-03 00:44:36.448202 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.07s
2025-05-03 00:44:36.448215 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.04s
2025-05-03 00:44:36.448227 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.88s
2025-05-03 00:44:36.448245 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.55s
2025-05-03 00:44:39.480793 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.36s
2025-05-03 00:44:39.480982 | orchestrator | 2025-05-03 00:44:39 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:44:39.486433 | orchestrator | 2025-05-03 00:44:39 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:44:39.487383 | orchestrator | 2025-05-03 00:44:39 | INFO  | Task 903127ce-5bf1-40a9-af76-072334020d0c is in state STARTED
2025-05-03 00:44:39.488891 | orchestrator | 2025-05-03 00:44:39 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:44:39.488969 | orchestrator | 2025-05-03 00:44:39 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:44:42.528523 | orchestrator | 2025-05-03 00:44:42 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:44:42.528738 | orchestrator | 2025-05-03 00:44:42 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:44:42.528771 | orchestrator | 2025-05-03 00:44:42 | INFO
| Task 903127ce-5bf1-40a9-af76-072334020d0c is in state STARTED
2025-05-03 00:44:42.529270 | orchestrator | 2025-05-03 00:44:42 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:44:45.578678 | orchestrator | 2025-05-03 00:44:42 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:44:45.578805 | orchestrator | 2025-05-03 00:44:45 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:44:45.579289 | orchestrator | 2025-05-03 00:44:45 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:44:45.579324 | orchestrator | 2025-05-03 00:44:45 | INFO  | Task 903127ce-5bf1-40a9-af76-072334020d0c is in state STARTED
2025-05-03 00:44:45.583455 | orchestrator | 2025-05-03 00:44:45 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:44:48.628706 | orchestrator | 2025-05-03 00:44:45 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:44:48.628906 | orchestrator | 2025-05-03 00:44:48 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:44:51.689802 | orchestrator | 2025-05-03 00:44:48 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:44:51.690072 | orchestrator | 2025-05-03 00:44:48 | INFO  | Task 903127ce-5bf1-40a9-af76-072334020d0c is in state SUCCESS
2025-05-03 00:44:51.690099 | orchestrator | 2025-05-03 00:44:48 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:44:51.690115 | orchestrator | 2025-05-03 00:44:48 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:44:51.690147 | orchestrator | 2025-05-03 00:44:51 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:44:51.690244 | orchestrator | 2025-05-03 00:44:51 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:44:51.691122 | orchestrator | 2025-05-03 00:44:51 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:44:54.790399 | orchestrator | 2025-05-03 00:44:51 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:44:54.790575 | orchestrator | 2025-05-03 00:44:54 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:44:54.831112 | orchestrator | 2025-05-03 00:44:54 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:44:57.878529 | orchestrator | 2025-05-03 00:44:54 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:44:57.878693 | orchestrator | 2025-05-03 00:44:54 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:44:57.878736 | orchestrator | 2025-05-03 00:44:57 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:44:57.878865 | orchestrator | 2025-05-03 00:44:57 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:44:57.879440 | orchestrator | 2025-05-03 00:44:57 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:44:57.879873 | orchestrator | 2025-05-03 00:44:57 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:45:00.925304 | orchestrator | 2025-05-03 00:45:00 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:45:00.926678 | orchestrator | 2025-05-03 00:45:00 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:45:00.927166 | orchestrator | 2025-05-03 00:45:00 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:45:03.968255 | orchestrator | 2025-05-03 00:45:00 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:45:03.968394 | orchestrator | 2025-05-03 00:45:03 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:45:03.969733 | orchestrator | 2025-05-03 00:45:03 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:45:03.970438 | orchestrator | 2025-05-03 00:45:03 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:45:07.015064 | orchestrator | 2025-05-03 00:45:03 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:45:07.015244 | orchestrator | 2025-05-03 00:45:07 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:45:07.015406 | orchestrator | 2025-05-03 00:45:07 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:45:07.016481 | orchestrator | 2025-05-03 00:45:07 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:45:10.076510 | orchestrator | 2025-05-03 00:45:07 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:45:10.076653 | orchestrator | 2025-05-03 00:45:10 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:45:10.076727 | orchestrator | 2025-05-03 00:45:10 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:45:10.076746 | orchestrator | 2025-05-03 00:45:10 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:45:10.076764 | orchestrator | 2025-05-03 00:45:10 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:45:13.123744 | orchestrator | 2025-05-03 00:45:13 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:45:13.126208 | orchestrator | 2025-05-03 00:45:13 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:45:13.127927 | orchestrator | 2025-05-03 00:45:13 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:45:16.190476 | orchestrator | 2025-05-03 00:45:13 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:45:16.190656 | orchestrator | 2025-05-03 00:45:16 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:45:16.190984 | orchestrator | 2025-05-03 00:45:16 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:45:16.193782 | orchestrator | 2025-05-03 00:45:16 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:45:19.245551 | orchestrator | 2025-05-03 00:45:16 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:45:19.245700 | orchestrator | 2025-05-03 00:45:19 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:45:19.249421 | orchestrator | 2025-05-03 00:45:19 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:45:19.251100 | orchestrator | 2025-05-03 00:45:19 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:45:22.299570 | orchestrator | 2025-05-03 00:45:19 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:45:22.299714 | orchestrator | 2025-05-03 00:45:22 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:45:22.300926 | orchestrator | 2025-05-03 00:45:22 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:45:22.302439 | orchestrator | 2025-05-03 00:45:22 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:45:22.303016 | orchestrator | 2025-05-03 00:45:22 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:45:25.355445 | orchestrator | 2025-05-03 00:45:25 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:45:28.403097 | orchestrator | 2025-05-03 00:45:25 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:45:28.403224 | orchestrator | 2025-05-03 00:45:25 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:45:28.403244 | orchestrator | 2025-05-03 00:45:25 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:45:28.403277 | orchestrator | 2025-05-03 00:45:28 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:45:28.406584 | orchestrator | 2025-05-03 00:45:28 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:45:28.406794 | orchestrator | 2025-05-03 00:45:28 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:45:28.407067 | orchestrator | 2025-05-03 00:45:28 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:45:31.448087 | orchestrator | 2025-05-03 00:45:31 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:45:31.448956 | orchestrator | 2025-05-03 00:45:31 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:45:31.450728 | orchestrator | 2025-05-03 00:45:31 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:45:31.450903 | orchestrator | 2025-05-03 00:45:31 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:45:34.507242 | orchestrator | 2025-05-03 00:45:34 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:45:34.512421 | orchestrator | 2025-05-03 00:45:34 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:45:37.566690 | orchestrator | 2025-05-03 00:45:34 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:45:37.566897 | orchestrator | 2025-05-03 00:45:34 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:45:37.566941 | orchestrator | 2025-05-03 00:45:37 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:45:37.567024 | orchestrator | 2025-05-03 00:45:37 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:45:37.567048 | orchestrator | 2025-05-03 00:45:37 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:45:40.610570 | orchestrator | 2025-05-03 00:45:37 | INFO  | Wait 1 second(s) until the next
check
2025-05-03 00:45:40.610753 | orchestrator | 2025-05-03 00:45:40 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:45:40.611468 | orchestrator | 2025-05-03 00:45:40 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:45:40.612747 | orchestrator | 2025-05-03 00:45:40 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:45:43.661935 | orchestrator | 2025-05-03 00:45:40 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:45:43.662138 | orchestrator | 2025-05-03 00:45:43 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:45:43.663619 | orchestrator | 2025-05-03 00:45:43 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state STARTED
2025-05-03 00:45:43.665411 | orchestrator | 2025-05-03 00:45:43 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:45:46.731905 | orchestrator | 2025-05-03 00:45:43 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:45:46.732110 | orchestrator | 2025-05-03 00:45:46 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:45:46.732239 | orchestrator |
2025-05-03 00:45:46.732274 | orchestrator |
2025-05-03 00:45:46.732290 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-05-03 00:45:46.732304 | orchestrator |
2025-05-03 00:45:46.732318 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-05-03 00:45:46.732332 | orchestrator | Saturday 03 May 2025 00:43:45 +0000 (0:00:00.202) 0:00:00.202 **********
2025-05-03 00:45:46.732346 | orchestrator | ok: [testbed-manager]
2025-05-03 00:45:46.732363 | orchestrator |
2025-05-03 00:45:46.732377 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-05-03 00:45:46.732391 | orchestrator | Saturday 03 May 2025 00:43:45 +0000 (0:00:00.793) 0:00:00.995 **********
2025-05-03 00:45:46.732405 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-05-03 00:45:46.732437 | orchestrator |
2025-05-03 00:45:46.732451 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-05-03 00:45:46.732465 | orchestrator | Saturday 03 May 2025 00:43:46 +0000 (0:00:00.619) 0:00:01.614 **********
2025-05-03 00:45:46.732479 | orchestrator | changed: [testbed-manager]
2025-05-03 00:45:46.732493 | orchestrator |
2025-05-03 00:45:46.732507 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-05-03 00:45:46.732521 | orchestrator | Saturday 03 May 2025 00:43:48 +0000 (0:00:01.586) 0:00:03.201 **********
2025-05-03 00:45:46.732535 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-05-03 00:45:46.732549 | orchestrator | ok: [testbed-manager]
2025-05-03 00:45:46.732563 | orchestrator |
2025-05-03 00:45:46.732576 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-05-03 00:45:46.732590 | orchestrator | Saturday 03 May 2025 00:44:41 +0000 (0:00:53.695) 0:00:56.897 **********
2025-05-03 00:45:46.732604 | orchestrator | changed: [testbed-manager]
2025-05-03 00:45:46.732618 | orchestrator |
2025-05-03 00:45:46.732631 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 00:45:46.732645 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:45:46.732660 | orchestrator |
2025-05-03 00:45:46.732675 | orchestrator | Saturday 03 May 2025 00:44:45 +0000 (0:00:03.539) 0:01:00.437 **********
2025-05-03 00:45:46.732689 | orchestrator | ===============================================================================
2025-05-03 00:45:46.732703 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 53.70s
2025-05-03 00:45:46.732717 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.54s
2025-05-03 00:45:46.732738 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.59s
2025-05-03 00:45:46.732762 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.79s
2025-05-03 00:45:46.732783 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.62s
2025-05-03 00:45:46.732827 | orchestrator |
2025-05-03 00:45:46.732848 | orchestrator | 2025-05-03 00:45:46 | INFO  | Task a0b691bf-35f8-4f55-94bc-ce57f46f06ea is in state SUCCESS
2025-05-03 00:45:46.734091 | orchestrator |
2025-05-03 00:45:46.734144 | orchestrator | PLAY [Apply role common] *******************************************************
2025-05-03 00:45:46.734193 | orchestrator |
2025-05-03 00:45:46.734208 | orchestrator | TASK [common : include_tasks] **************************************************
2025-05-03 00:45:46.734221 | orchestrator | Saturday 03 May 2025 00:43:25 +0000 (0:00:00.326) 0:00:00.326 **********
2025-05-03 00:45:46.734236 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-03 00:45:46.734251 | orchestrator |
2025-05-03 00:45:46.734265 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-05-03 00:45:46.734294 | orchestrator | Saturday 03 May 2025 00:43:26 +0000 (0:00:01.449) 0:00:01.776 **********
2025-05-03 00:45:46.734308 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-03 00:45:46.734322 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-03 00:45:46.734337 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name':
'cron'}, 'cron']) 2025-05-03 00:45:46.734350 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-03 00:45:46.734364 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-03 00:45:46.734378 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-03 00:45:46.734391 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-03 00:45:46.734405 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-03 00:45:46.734419 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-03 00:45:46.734434 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-03 00:45:46.734448 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-03 00:45:46.734462 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-03 00:45:46.734476 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-03 00:45:46.734490 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-03 00:45:46.734503 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-03 00:45:46.734523 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-03 00:45:46.734537 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-03 00:45:46.734555 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-03 00:45:46.734570 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 
2025-05-03 00:45:46.734584 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-03 00:45:46.734598 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-03 00:45:46.734611 | orchestrator | 2025-05-03 00:45:46.734625 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-05-03 00:45:46.734639 | orchestrator | Saturday 03 May 2025 00:43:30 +0000 (0:00:03.855) 0:00:05.631 ********** 2025-05-03 00:45:46.734653 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 00:45:46.734673 | orchestrator | 2025-05-03 00:45:46.734687 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-05-03 00:45:46.734701 | orchestrator | Saturday 03 May 2025 00:43:32 +0000 (0:00:01.740) 0:00:07.371 ********** 2025-05-03 00:45:46.734720 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-03 00:45:46.734739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-03 00:45:46.734771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-03 00:45:46.734787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-03 00:45:46.734822 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-03 00:45:46.734838 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-03 00:45:46.734853 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.734868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.734882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.734911 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-03 00:45:46.734926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.734940 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.734955 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.735025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.735044 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.735058 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.735087 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.735102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.735117 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.735131 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.735146 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.735160 | orchestrator | 2025-05-03 00:45:46.735174 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-05-03 00:45:46.735188 | orchestrator | Saturday 03 May 2025 00:43:37 +0000 (0:00:05.058) 0:00:12.430 ********** 2025-05-03 00:45:46.735203 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-03 00:45:46.735218 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:45:46.735248 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:45:46.735263 | orchestrator | skipping: [testbed-manager] 2025-05-03 00:45:46.735285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-03 00:45:46.735300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:45:46.735315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:45:46.735329 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:45:46.735343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-03 00:45:46.735358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.735372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.735392 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:45:46.735407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-03 00:45:46.735422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.735444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.735459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-03 00:45:46.735473 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.735488 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.735503 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:45:46.735517 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:45:46.735531 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-03 00:45:46.735552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.735567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.735581 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:45:46.735609 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-03 00:45:46.735625 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.735639 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.735654 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:45:46.735668 | orchestrator |
2025-05-03 00:45:46.735682 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-05-03 00:45:46.735696 | orchestrator | Saturday 03 May 2025 00:43:38 +0000 (0:00:01.424) 0:00:13.855 **********
2025-05-03 00:45:46.735711 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-03 00:45:46.735725 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.735746 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.735760 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:45:46.735774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-03 00:45:46.735813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.736551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.736594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-03 00:45:46.736611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.736626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.736652 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:45:46.736666 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:45:46.736681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-03 00:45:46.736695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.736710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.736724 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:45:46.736753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-03 00:45:46.736774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.736790 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.736864 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:45:46.736879 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-03 00:45:46.736901 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.736916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.736930 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:45:46.736944 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-03 00:45:46.736967 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.736982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.736997 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:45:46.737011 | orchestrator |
2025-05-03 00:45:46.737025 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-05-03 00:45:46.737039 | orchestrator | Saturday 03 May 2025 00:43:42 +0000 (0:00:03.183) 0:00:17.038 **********
2025-05-03 00:45:46.737053 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:45:46.737067 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:45:46.737081 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:45:46.737095 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:45:46.737109 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:45:46.737126 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:45:46.737141 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:45:46.737157 | orchestrator |
2025-05-03 00:45:46.737179 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-05-03 00:45:46.737196 | orchestrator | Saturday 03 May 2025 00:43:42 +0000 (0:00:00.862) 0:00:17.901 **********
2025-05-03 00:45:46.737212 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:45:46.737227 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:45:46.737243 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:45:46.737258 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:45:46.737273 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:45:46.737353 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:45:46.737370 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:45:46.737383 | orchestrator |
2025-05-03 00:45:46.737397 | orchestrator | TASK [common : Ensure fluentd image is present for label check] ****************
2025-05-03 00:45:46.737411 | orchestrator | Saturday 03 May 2025 00:43:43 +0000 (0:00:00.855) 0:00:18.756 **********
2025-05-03 00:45:46.737425 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:45:46.737439 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:45:46.737454 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:45:46.737468 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:45:46.737480 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:45:46.737492 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:45:46.737504 | orchestrator | changed: [testbed-manager]
2025-05-03 00:45:46.737516 | orchestrator |
2025-05-03 00:45:46.737528 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ******************************
2025-05-03 00:45:46.737540 | orchestrator | Saturday 03 May 2025 00:44:22 +0000 (0:00:38.501) 0:00:57.257 **********
2025-05-03 00:45:46.737553 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:45:46.737564 | orchestrator | ok: [testbed-manager]
2025-05-03 00:45:46.737576 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:45:46.737588 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:45:46.737601 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:45:46.737613 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:45:46.737625 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:45:46.737643 | orchestrator |
2025-05-03 00:45:46.737656 | orchestrator | TASK [common : Set fluentd facts] **********************************************
2025-05-03 00:45:46.737668 | orchestrator | Saturday 03 May 2025 00:44:24 +0000 (0:00:02.389) 0:00:59.647 **********
2025-05-03 00:45:46.737680 | orchestrator | ok: [testbed-manager]
2025-05-03 00:45:46.737693 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:45:46.737705 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:45:46.737717 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:45:46.737729 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:45:46.737741 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:45:46.737753 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:45:46.737765 | orchestrator |
2025-05-03 00:45:46.737777 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ******************************
2025-05-03 00:45:46.737790 | orchestrator | Saturday 03 May 2025 00:44:25 +0000 (0:00:01.225) 0:01:00.873 **********
2025-05-03 00:45:46.737819 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:45:46.737832 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:45:46.737844 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:45:46.737856 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:45:46.737868 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:45:46.737880 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:45:46.737892 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:45:46.737904 | orchestrator |
2025-05-03 00:45:46.737916 | orchestrator | TASK [common : Set fluentd facts] **********************************************
2025-05-03 00:45:46.737928 | orchestrator | Saturday 03 May 2025 00:44:26 +0000 (0:00:01.033) 0:01:01.907 **********
2025-05-03 00:45:46.737940 | orchestrator | skipping: [testbed-manager]
2025-05-03 00:45:46.737952 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:45:46.737975 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:45:46.738074 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:45:46.738087 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:45:46.738100 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:45:46.738120 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:45:46.738132 | orchestrator |
2025-05-03 00:45:46.738145 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-05-03 00:45:46.738157 | orchestrator | Saturday 03 May 2025 00:44:27 +0000 (0:00:00.846) 0:01:02.754 **********
2025-05-03 00:45:46.738178 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-03 00:45:46.738203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-03 00:45:46.738221 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.738235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-03 00:45:46.738248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-03 00:45:46.738261 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-03 00:45:46.738274 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-03 00:45:46.738292 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-03 00:45:46.738315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.738329 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.738342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.738355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.738374 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.738390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.738410 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.738433 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.738449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.738466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.738479 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.738492 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.738505 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:45:46.738517 | orchestrator |
2025-05-03 00:45:46.738529 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-05-03 00:45:46.738551 | orchestrator | Saturday 03 May 2025 00:44:31 +0000 (0:00:04.063) 0:01:06.817 **********
2025-05-03 00:45:46.738564 | orchestrator | [WARNING]: Skipped
2025-05-03 00:45:46.738577 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-05-03 00:45:46.738589 | orchestrator | to this access issue:
2025-05-03 00:45:46.738602 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-05-03 00:45:46.738614 | orchestrator | directory
2025-05-03 00:45:46.738626 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-03 00:45:46.738638 | orchestrator |
2025-05-03 00:45:46.738650 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-05-03 00:45:46.738663 | orchestrator | Saturday 03 May 2025 00:44:32 +0000 (0:00:00.689) 0:01:07.506 **********
2025-05-03 00:45:46.738675 | orchestrator | [WARNING]: Skipped
2025-05-03 00:45:46.738692 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-05-03 00:45:46.738705 | orchestrator | to this access issue:
2025-05-03 00:45:46.738717 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-05-03 00:45:46.738729 | orchestrator | directory
2025-05-03 00:45:46.738741 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-03 00:45:46.738754 | orchestrator |
2025-05-03 00:45:46.738766 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-05-03 00:45:46.738778 | orchestrator | Saturday 03 May
2025 00:44:33 +0000 (0:00:00.706) 0:01:08.212 ********** 2025-05-03 00:45:46.738790 | orchestrator | [WARNING]: Skipped 2025-05-03 00:45:46.738869 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-05-03 00:45:46.738882 | orchestrator | to this access issue: 2025-05-03 00:45:46.738895 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-05-03 00:45:46.738907 | orchestrator | directory 2025-05-03 00:45:46.738919 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-03 00:45:46.738932 | orchestrator | 2025-05-03 00:45:46.738944 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-05-03 00:45:46.738962 | orchestrator | Saturday 03 May 2025 00:44:33 +0000 (0:00:00.554) 0:01:08.767 ********** 2025-05-03 00:45:46.738976 | orchestrator | [WARNING]: Skipped 2025-05-03 00:45:46.738988 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-05-03 00:45:46.739001 | orchestrator | to this access issue: 2025-05-03 00:45:46.739013 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-05-03 00:45:46.739025 | orchestrator | directory 2025-05-03 00:45:46.739035 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-03 00:45:46.739045 | orchestrator | 2025-05-03 00:45:46.739056 | orchestrator | TASK [common : Copying over td-agent.conf] ************************************* 2025-05-03 00:45:46.739066 | orchestrator | Saturday 03 May 2025 00:44:34 +0000 (0:00:00.510) 0:01:09.278 ********** 2025-05-03 00:45:46.739076 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:45:46.739089 | orchestrator | changed: [testbed-manager] 2025-05-03 00:45:46.739106 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:45:46.739117 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:45:46.739127 | orchestrator | changed: [testbed-node-3] 
2025-05-03 00:45:46.739137 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:45:46.739147 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:45:46.739157 | orchestrator | 2025-05-03 00:45:46.739167 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-05-03 00:45:46.739177 | orchestrator | Saturday 03 May 2025 00:44:38 +0000 (0:00:03.904) 0:01:13.182 ********** 2025-05-03 00:45:46.739188 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-03 00:45:46.739198 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-03 00:45:46.739208 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-03 00:45:46.739225 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-03 00:45:46.739235 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-03 00:45:46.739245 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-03 00:45:46.739255 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-03 00:45:46.739265 | orchestrator | 2025-05-03 00:45:46.739276 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-05-03 00:45:46.739286 | orchestrator | Saturday 03 May 2025 00:44:40 +0000 (0:00:02.612) 0:01:15.794 ********** 2025-05-03 00:45:46.739296 | orchestrator | changed: [testbed-manager] 2025-05-03 00:45:46.739306 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:45:46.739316 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:45:46.739326 | orchestrator | changed: [testbed-node-2] 2025-05-03 
00:45:46.739335 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:45:46.739345 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:45:46.739355 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:45:46.739365 | orchestrator | 2025-05-03 00:45:46.739375 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-05-03 00:45:46.739385 | orchestrator | Saturday 03 May 2025 00:44:43 +0000 (0:00:02.617) 0:01:18.411 ********** 2025-05-03 00:45:46.739400 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-03 00:45:46.739411 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:45:46.739422 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-03 00:45:46.739438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:45:46.739449 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-03 00:45:46.739465 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.739480 
| orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.739491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:45:46.739505 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-03 00:45:46.739516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:45:46.739532 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-03 00:45:46.739543 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:45:46.739558 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.739569 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-03 00:45:46.739580 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:45:46.739590 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.739604 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-03 00:45:46.739615 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:45:46.739636 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.739678 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.739690 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.739701 | orchestrator | 2025-05-03 00:45:46.739711 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-05-03 00:45:46.739721 | orchestrator | Saturday 03 May 2025 00:44:45 +0000 (0:00:02.450) 0:01:20.862 ********** 2025-05-03 00:45:46.739732 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-03 00:45:46.739742 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-03 00:45:46.739752 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-03 00:45:46.739762 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-03 00:45:46.739772 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-03 00:45:46.739782 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-03 00:45:46.739808 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-03 00:45:46.739819 | orchestrator | 2025-05-03 00:45:46.739829 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-05-03 00:45:46.739839 | orchestrator | Saturday 03 May 2025 00:44:48 +0000 (0:00:02.089) 0:01:22.952 ********** 2025-05-03 00:45:46.739849 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 
2025-05-03 00:45:46.739859 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-03 00:45:46.739870 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-03 00:45:46.739880 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-03 00:45:46.739889 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-03 00:45:46.739899 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-03 00:45:46.739910 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-03 00:45:46.739920 | orchestrator | 2025-05-03 00:45:46.739930 | orchestrator | TASK [common : Check common containers] **************************************** 2025-05-03 00:45:46.739940 | orchestrator | Saturday 03 May 2025 00:44:50 +0000 (0:00:02.415) 0:01:25.367 ********** 2025-05-03 00:45:46.739950 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-03 00:45:46.739966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-03 00:45:46.739985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-03 00:45:46.739997 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.740008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2025-05-03 00:45:46.740023 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-03 00:45:46.740034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.740045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.740063 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.740078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.740089 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-03 00:45:46.740100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.740110 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-03 00:45:46.740121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.740132 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.740142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.740164 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.740180 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.740191 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.740202 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.740212 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:45:46.740223 | orchestrator | 2025-05-03 00:45:46.740233 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-05-03 00:45:46.740244 | orchestrator | Saturday 03 May 2025 00:44:54 +0000 (0:00:04.115) 0:01:29.482 ********** 2025-05-03 00:45:46.740254 | orchestrator | changed: [testbed-manager] 2025-05-03 00:45:46.740264 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:45:46.740274 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:45:46.740284 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:45:46.740294 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:45:46.740304 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:45:46.740314 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:45:46.740324 | orchestrator | 2025-05-03 00:45:46.740334 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-05-03 00:45:46.740344 | 
orchestrator | Saturday 03 May 2025 00:44:56 +0000 (0:00:02.447) 0:01:31.929 ********** 2025-05-03 00:45:46.740354 | orchestrator | changed: [testbed-manager] 2025-05-03 00:45:46.740369 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:45:46.740384 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:45:46.740394 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:45:46.740404 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:45:46.740414 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:45:46.740424 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:45:46.740470 | orchestrator | 2025-05-03 00:45:46.740481 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-03 00:45:46.740492 | orchestrator | Saturday 03 May 2025 00:44:58 +0000 (0:00:01.540) 0:01:33.470 ********** 2025-05-03 00:45:46.740502 | orchestrator | 2025-05-03 00:45:46.740512 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-03 00:45:46.740522 | orchestrator | Saturday 03 May 2025 00:44:58 +0000 (0:00:00.052) 0:01:33.523 ********** 2025-05-03 00:45:46.740532 | orchestrator | 2025-05-03 00:45:46.740542 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-03 00:45:46.740552 | orchestrator | Saturday 03 May 2025 00:44:58 +0000 (0:00:00.067) 0:01:33.590 ********** 2025-05-03 00:45:46.740561 | orchestrator | 2025-05-03 00:45:46.740572 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-03 00:45:46.740582 | orchestrator | Saturday 03 May 2025 00:44:58 +0000 (0:00:00.053) 0:01:33.643 ********** 2025-05-03 00:45:46.740592 | orchestrator | 2025-05-03 00:45:46.740602 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-03 00:45:46.740612 | orchestrator | Saturday 03 May 2025 00:44:58 +0000 (0:00:00.193) 0:01:33.836 
********** 2025-05-03 00:45:46.740622 | orchestrator | 2025-05-03 00:45:46.740632 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-03 00:45:46.740642 | orchestrator | Saturday 03 May 2025 00:44:58 +0000 (0:00:00.052) 0:01:33.889 ********** 2025-05-03 00:45:46.740652 | orchestrator | 2025-05-03 00:45:46.740662 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-03 00:45:46.740672 | orchestrator | Saturday 03 May 2025 00:44:59 +0000 (0:00:00.049) 0:01:33.939 ********** 2025-05-03 00:45:46.740682 | orchestrator | 2025-05-03 00:45:46.740692 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-05-03 00:45:46.740702 | orchestrator | Saturday 03 May 2025 00:44:59 +0000 (0:00:00.066) 0:01:34.005 ********** 2025-05-03 00:45:46.740712 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:45:46.740726 | orchestrator | changed: [testbed-manager] 2025-05-03 00:45:46.740737 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:45:46.740747 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:45:46.740757 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:45:46.740767 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:45:46.740777 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:45:46.740787 | orchestrator | 2025-05-03 00:45:46.740813 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-05-03 00:45:46.740824 | orchestrator | Saturday 03 May 2025 00:45:06 +0000 (0:00:07.888) 0:01:41.894 ********** 2025-05-03 00:45:46.740834 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:45:46.740844 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:45:46.740853 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:45:46.740863 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:45:46.740873 | orchestrator | changed: [testbed-node-4] 
2025-05-03 00:45:46.740883 | orchestrator | changed: [testbed-manager] 2025-05-03 00:45:46.740893 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:45:46.740903 | orchestrator | 2025-05-03 00:45:46.740914 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-05-03 00:45:46.740924 | orchestrator | Saturday 03 May 2025 00:45:33 +0000 (0:00:26.506) 0:02:08.400 ********** 2025-05-03 00:45:46.740934 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:45:46.740944 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:45:46.740954 | orchestrator | ok: [testbed-manager] 2025-05-03 00:45:46.740964 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:45:46.740974 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:45:46.740997 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:45:46.741013 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:45:46.741024 | orchestrator | 2025-05-03 00:45:46.741034 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-05-03 00:45:46.741044 | orchestrator | Saturday 03 May 2025 00:45:35 +0000 (0:00:02.453) 0:02:10.853 ********** 2025-05-03 00:45:46.741054 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:45:46.741064 | orchestrator | changed: [testbed-manager] 2025-05-03 00:45:46.741074 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:45:46.741084 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:45:46.741094 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:45:46.741104 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:45:46.741114 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:45:46.741124 | orchestrator | 2025-05-03 00:45:46.741134 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-03 00:45:46.741146 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-03 00:45:46.741157 | 
orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-03 00:45:46.741168 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-03 00:45:46.741178 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-03 00:45:46.741188 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-03 00:45:46.741198 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-03 00:45:46.741208 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-03 00:45:46.741219 | orchestrator | 2025-05-03 00:45:46.741228 | orchestrator | 2025-05-03 00:45:46.741238 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-03 00:45:46.741249 | orchestrator | Saturday 03 May 2025 00:45:45 +0000 (0:00:09.386) 0:02:20.240 ********** 2025-05-03 00:45:46.741259 | orchestrator | =============================================================================== 2025-05-03 00:45:46.741269 | orchestrator | common : Ensure fluentd image is present for label check --------------- 38.50s 2025-05-03 00:45:46.741279 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 26.51s 2025-05-03 00:45:46.741293 | orchestrator | common : Restart cron container ----------------------------------------- 9.39s 2025-05-03 00:45:46.741304 | orchestrator | common : Restart fluentd container -------------------------------------- 7.89s 2025-05-03 00:45:46.741314 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.06s 2025-05-03 00:45:46.741324 | orchestrator | common : Check common containers ---------------------------------------- 4.12s 2025-05-03 00:45:46.741334 
| orchestrator | common : Copying over config.json files for services -------------------- 4.06s 2025-05-03 00:45:46.741344 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 3.90s 2025-05-03 00:45:46.741354 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.86s 2025-05-03 00:45:46.741364 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.18s 2025-05-03 00:45:46.741374 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.62s 2025-05-03 00:45:46.741384 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.61s 2025-05-03 00:45:46.741395 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.45s 2025-05-03 00:45:46.741410 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.45s 2025-05-03 00:45:46.741425 | orchestrator | common : Creating log volume -------------------------------------------- 2.45s 2025-05-03 00:45:49.804275 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.42s 2025-05-03 00:45:49.804400 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 2.39s 2025-05-03 00:45:49.804418 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.09s 2025-05-03 00:45:49.804433 | orchestrator | common : include_tasks -------------------------------------------------- 1.74s 2025-05-03 00:45:49.804447 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.54s 2025-05-03 00:45:49.804462 | orchestrator | 2025-05-03 00:45:46 | INFO  | Task 9a37dffd-f1da-42b3-89a6-3729c2e18934 is in state STARTED 2025-05-03 00:45:49.804477 | orchestrator | 2025-05-03 00:45:46 | INFO  | Task 8c222ff3-5360-43b7-a7f8-c1536d8f923b is in state STARTED 2025-05-03 
00:45:49.804491 | orchestrator | 2025-05-03 00:45:46 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED 2025-05-03 00:45:49.804505 | orchestrator | 2025-05-03 00:45:46 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:45:49.804519 | orchestrator | 2025-05-03 00:45:46 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:45:49.804549 | orchestrator | 2025-05-03 00:45:49 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:45:49.805875 | orchestrator | 2025-05-03 00:45:49 | INFO  | Task 9a37dffd-f1da-42b3-89a6-3729c2e18934 is in state STARTED 2025-05-03 00:45:49.807277 | orchestrator | 2025-05-03 00:45:49 | INFO  | Task 8c222ff3-5360-43b7-a7f8-c1536d8f923b is in state STARTED 2025-05-03 00:45:49.809144 | orchestrator | 2025-05-03 00:45:49 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED 2025-05-03 00:45:49.811533 | orchestrator | 2025-05-03 00:45:49 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:45:49.817190 | orchestrator | 2025-05-03 00:45:49 | INFO  | Task 4417862d-a28d-4646-9874-9860017b7ab1 is in state STARTED 2025-05-03 00:45:52.862973 | orchestrator | 2025-05-03 00:45:49 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:45:52.863097 | orchestrator | 2025-05-03 00:45:52 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:45:52.863541 | orchestrator | 2025-05-03 00:45:52 | INFO  | Task 9a37dffd-f1da-42b3-89a6-3729c2e18934 is in state STARTED 2025-05-03 00:45:52.864044 | orchestrator | 2025-05-03 00:45:52 | INFO  | Task 8c222ff3-5360-43b7-a7f8-c1536d8f923b is in state STARTED 2025-05-03 00:45:52.866312 | orchestrator | 2025-05-03 00:45:52 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED 2025-05-03 00:45:52.866836 | orchestrator | 2025-05-03 00:45:52 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 
00:45:52.867722 | orchestrator | 2025-05-03 00:45:52 | INFO  | Task 4417862d-a28d-4646-9874-9860017b7ab1 is in state STARTED 2025-05-03 00:45:55.895616 | orchestrator | 2025-05-03 00:45:52 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:45:55.895742 | orchestrator | 2025-05-03 00:45:55 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:45:55.896893 | orchestrator | 2025-05-03 00:45:55 | INFO  | Task 9a37dffd-f1da-42b3-89a6-3729c2e18934 is in state STARTED 2025-05-03 00:45:55.897827 | orchestrator | 2025-05-03 00:45:55 | INFO  | Task 8c222ff3-5360-43b7-a7f8-c1536d8f923b is in state STARTED 2025-05-03 00:45:55.899122 | orchestrator | 2025-05-03 00:45:55 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED 2025-05-03 00:45:55.899984 | orchestrator | 2025-05-03 00:45:55 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:45:55.901046 | orchestrator | 2025-05-03 00:45:55 | INFO  | Task 4417862d-a28d-4646-9874-9860017b7ab1 is in state STARTED 2025-05-03 00:45:55.901125 | orchestrator | 2025-05-03 00:45:55 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:45:58.955861 | orchestrator | 2025-05-03 00:45:58 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:45:58.956240 | orchestrator | 2025-05-03 00:45:58 | INFO  | Task 9a37dffd-f1da-42b3-89a6-3729c2e18934 is in state STARTED 2025-05-03 00:45:58.963140 | orchestrator | 2025-05-03 00:45:58 | INFO  | Task 8c222ff3-5360-43b7-a7f8-c1536d8f923b is in state STARTED 2025-05-03 00:46:02.009170 | orchestrator | 2025-05-03 00:45:58 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED 2025-05-03 00:46:02.009290 | orchestrator | 2025-05-03 00:45:58 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:46:02.009312 | orchestrator | 2025-05-03 00:45:58 | INFO  | Task 4417862d-a28d-4646-9874-9860017b7ab1 is in state STARTED 2025-05-03 
00:46:02.009327 | orchestrator | 2025-05-03 00:45:58 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:46:02.009355 | orchestrator | 2025-05-03 00:46:02 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:46:02.012461 | orchestrator | 2025-05-03 00:46:02 | INFO  | Task 9a37dffd-f1da-42b3-89a6-3729c2e18934 is in state STARTED 2025-05-03 00:46:02.016213 | orchestrator | 2025-05-03 00:46:02 | INFO  | Task 8c222ff3-5360-43b7-a7f8-c1536d8f923b is in state STARTED 2025-05-03 00:46:02.018294 | orchestrator | 2025-05-03 00:46:02 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED 2025-05-03 00:46:02.018906 | orchestrator | 2025-05-03 00:46:02 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:46:02.021064 | orchestrator | 2025-05-03 00:46:02 | INFO  | Task 4417862d-a28d-4646-9874-9860017b7ab1 is in state STARTED 2025-05-03 00:46:02.021477 | orchestrator | 2025-05-03 00:46:02 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:46:05.094327 | orchestrator | 2025-05-03 00:46:05 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:46:05.096516 | orchestrator | 2025-05-03 00:46:05 | INFO  | Task 9a37dffd-f1da-42b3-89a6-3729c2e18934 is in state STARTED 2025-05-03 00:46:05.100317 | orchestrator | 2025-05-03 00:46:05 | INFO  | Task 8c222ff3-5360-43b7-a7f8-c1536d8f923b is in state STARTED 2025-05-03 00:46:05.103702 | orchestrator | 2025-05-03 00:46:05 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED 2025-05-03 00:46:05.108185 | orchestrator | 2025-05-03 00:46:05 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:46:05.109120 | orchestrator | 2025-05-03 00:46:05 | INFO  | Task 4417862d-a28d-4646-9874-9860017b7ab1 is in state STARTED 2025-05-03 00:46:08.155841 | orchestrator | 2025-05-03 00:46:05 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:46:08.156025 | orchestrator 
| 2025-05-03 00:46:08 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:46:08.157478 | orchestrator | 2025-05-03 00:46:08 | INFO  | Task 9a37dffd-f1da-42b3-89a6-3729c2e18934 is in state STARTED 2025-05-03 00:46:08.159001 | orchestrator | 2025-05-03 00:46:08 | INFO  | Task 8c222ff3-5360-43b7-a7f8-c1536d8f923b is in state SUCCESS 2025-05-03 00:46:08.159038 | orchestrator | 2025-05-03 00:46:08 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED 2025-05-03 00:46:08.160119 | orchestrator | 2025-05-03 00:46:08 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:46:08.160573 | orchestrator | 2025-05-03 00:46:08 | INFO  | Task 4417862d-a28d-4646-9874-9860017b7ab1 is in state STARTED 2025-05-03 00:46:11.192277 | orchestrator | 2025-05-03 00:46:08 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:46:11.192390 | orchestrator | 2025-05-03 00:46:11 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:46:11.193357 | orchestrator | 2025-05-03 00:46:11 | INFO  | Task 9a37dffd-f1da-42b3-89a6-3729c2e18934 is in state STARTED 2025-05-03 00:46:11.194877 | orchestrator | 2025-05-03 00:46:11 | INFO  | Task 66bbf261-d1c1-4149-bca4-9dd405bf4404 is in state STARTED 2025-05-03 00:46:11.195620 | orchestrator | 2025-05-03 00:46:11 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED 2025-05-03 00:46:11.196486 | orchestrator | 2025-05-03 00:46:11 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:46:11.197378 | orchestrator | 2025-05-03 00:46:11 | INFO  | Task 4417862d-a28d-4646-9874-9860017b7ab1 is in state STARTED 2025-05-03 00:46:14.224491 | orchestrator | 2025-05-03 00:46:11 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:46:14.224609 | orchestrator | 2025-05-03 00:46:14 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:46:14.225085 | orchestrator | 
2025-05-03 00:46:14 | INFO  | Task 9a37dffd-f1da-42b3-89a6-3729c2e18934 is in state STARTED 2025-05-03 00:46:14.225730 | orchestrator | 2025-05-03 00:46:14 | INFO  | Task 66bbf261-d1c1-4149-bca4-9dd405bf4404 is in state STARTED 2025-05-03 00:46:14.226426 | orchestrator | 2025-05-03 00:46:14 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED 2025-05-03 00:46:14.227362 | orchestrator | 2025-05-03 00:46:14 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:46:14.227992 | orchestrator | 2025-05-03 00:46:14 | INFO  | Task 4417862d-a28d-4646-9874-9860017b7ab1 is in state STARTED 2025-05-03 00:46:17.261198 | orchestrator | 2025-05-03 00:46:14 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:46:17.261323 | orchestrator | 2025-05-03 00:46:17 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:46:17.261425 | orchestrator | 2025-05-03 00:46:17 | INFO  | Task 9a37dffd-f1da-42b3-89a6-3729c2e18934 is in state STARTED 2025-05-03 00:46:17.262003 | orchestrator | 2025-05-03 00:46:17 | INFO  | Task 66bbf261-d1c1-4149-bca4-9dd405bf4404 is in state STARTED 2025-05-03 00:46:17.262762 | orchestrator | 2025-05-03 00:46:17 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED 2025-05-03 00:46:17.263260 | orchestrator | 2025-05-03 00:46:17 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:46:17.264017 | orchestrator | 2025-05-03 00:46:17 | INFO  | Task 4417862d-a28d-4646-9874-9860017b7ab1 is in state STARTED 2025-05-03 00:46:17.264826 | orchestrator | 2025-05-03 00:46:17 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:46:20.299523 | orchestrator | 2025-05-03 00:46:20 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:46:20.300409 | orchestrator | 2025-05-03 00:46:20 | INFO  | Task 9a37dffd-f1da-42b3-89a6-3729c2e18934 is in state STARTED 2025-05-03 00:46:20.300452 | orchestrator | 
2025-05-03 00:46:20 | INFO  | Task 66bbf261-d1c1-4149-bca4-9dd405bf4404 is in state STARTED 2025-05-03 00:46:20.300766 | orchestrator | 2025-05-03 00:46:20 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED 2025-05-03 00:46:20.301379 | orchestrator | 2025-05-03 00:46:20 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:46:20.303408 | orchestrator | 2025-05-03 00:46:20 | INFO  | Task 4417862d-a28d-4646-9874-9860017b7ab1 is in state SUCCESS 2025-05-03 00:46:20.304460 | orchestrator | 2025-05-03 00:46:20.304490 | orchestrator | 2025-05-03 00:46:20.304506 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-03 00:46:20.304523 | orchestrator | 2025-05-03 00:46:20.304537 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-03 00:46:20.304551 | orchestrator | Saturday 03 May 2025 00:45:49 +0000 (0:00:00.350) 0:00:00.350 ********** 2025-05-03 00:46:20.304565 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:46:20.304581 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:46:20.304595 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:46:20.304608 | orchestrator | 2025-05-03 00:46:20.304623 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-03 00:46:20.304637 | orchestrator | Saturday 03 May 2025 00:45:50 +0000 (0:00:00.523) 0:00:00.873 ********** 2025-05-03 00:46:20.304651 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-05-03 00:46:20.304665 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-05-03 00:46:20.304679 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-05-03 00:46:20.304693 | orchestrator | 2025-05-03 00:46:20.304707 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-05-03 00:46:20.304721 | orchestrator | 2025-05-03 
00:46:20.304735 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-05-03 00:46:20.304749 | orchestrator | Saturday 03 May 2025 00:45:50 +0000 (0:00:00.458) 0:00:01.332 ********** 2025-05-03 00:46:20.304763 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:46:20.304808 | orchestrator | 2025-05-03 00:46:20.304822 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-05-03 00:46:20.304836 | orchestrator | Saturday 03 May 2025 00:45:51 +0000 (0:00:00.957) 0:00:02.289 ********** 2025-05-03 00:46:20.304850 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-03 00:46:20.304864 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-03 00:46:20.304878 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-03 00:46:20.304892 | orchestrator | 2025-05-03 00:46:20.304906 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-05-03 00:46:20.304920 | orchestrator | Saturday 03 May 2025 00:45:52 +0000 (0:00:00.882) 0:00:03.171 ********** 2025-05-03 00:46:20.304934 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-03 00:46:20.304948 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-03 00:46:20.304962 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-03 00:46:20.304976 | orchestrator | 2025-05-03 00:46:20.304990 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-05-03 00:46:20.305005 | orchestrator | Saturday 03 May 2025 00:45:54 +0000 (0:00:02.136) 0:00:05.308 ********** 2025-05-03 00:46:20.305019 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:46:20.305049 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:46:20.305063 | orchestrator | changed: [testbed-node-0] 2025-05-03 
00:46:20.305077 | orchestrator | 2025-05-03 00:46:20.305096 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-05-03 00:46:20.305110 | orchestrator | Saturday 03 May 2025 00:45:57 +0000 (0:00:02.793) 0:00:08.101 ********** 2025-05-03 00:46:20.305124 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:46:20.305138 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:46:20.305152 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:46:20.305166 | orchestrator | 2025-05-03 00:46:20.305180 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-03 00:46:20.305195 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-03 00:46:20.305223 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-03 00:46:20.305237 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-03 00:46:20.305251 | orchestrator | 2025-05-03 00:46:20.305265 | orchestrator | 2025-05-03 00:46:20.305279 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-03 00:46:20.305293 | orchestrator | Saturday 03 May 2025 00:46:06 +0000 (0:00:09.275) 0:00:17.376 ********** 2025-05-03 00:46:20.305307 | orchestrator | =============================================================================== 2025-05-03 00:46:20.305321 | orchestrator | memcached : Restart memcached container --------------------------------- 9.28s 2025-05-03 00:46:20.305335 | orchestrator | memcached : Check memcached container ----------------------------------- 2.79s 2025-05-03 00:46:20.305349 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.14s 2025-05-03 00:46:20.305363 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.96s 
2025-05-03 00:46:20.305377 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.88s 2025-05-03 00:46:20.305391 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.52s 2025-05-03 00:46:20.305405 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2025-05-03 00:46:20.305419 | orchestrator | 2025-05-03 00:46:20.305433 | orchestrator | 2025-05-03 00:46:20.305447 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-03 00:46:20.305461 | orchestrator | 2025-05-03 00:46:20.305475 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-03 00:46:20.305489 | orchestrator | Saturday 03 May 2025 00:45:51 +0000 (0:00:00.289) 0:00:00.289 ********** 2025-05-03 00:46:20.305503 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:46:20.305517 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:46:20.305531 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:46:20.305545 | orchestrator | 2025-05-03 00:46:20.305560 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-03 00:46:20.305583 | orchestrator | Saturday 03 May 2025 00:45:51 +0000 (0:00:00.400) 0:00:00.689 ********** 2025-05-03 00:46:20.305598 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-05-03 00:46:20.305612 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-05-03 00:46:20.305626 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-05-03 00:46:20.305640 | orchestrator | 2025-05-03 00:46:20.305654 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-05-03 00:46:20.305668 | orchestrator | 2025-05-03 00:46:20.305682 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-05-03 00:46:20.305696 | 
orchestrator | Saturday 03 May 2025 00:45:52 +0000 (0:00:00.448) 0:00:01.138 ********** 2025-05-03 00:46:20.305710 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:46:20.305724 | orchestrator | 2025-05-03 00:46:20.305745 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-05-03 00:46:20.305766 | orchestrator | Saturday 03 May 2025 00:45:52 +0000 (0:00:00.751) 0:00:01.890 ********** 2025-05-03 00:46:20.305800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-03 00:46:20.305828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-03 00:46:20.305844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-03 00:46:20.305859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-03 00:46:20.305875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-03 00:46:20.305905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 
'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-03 00:46:20.305921 | orchestrator | 2025-05-03 00:46:20.305936 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-05-03 00:46:20.305950 | orchestrator | Saturday 03 May 2025 00:45:54 +0000 (0:00:01.497) 0:00:03.387 ********** 2025-05-03 00:46:20.305964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-03 00:46:20.305986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-03 00:46:20.306001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-03 00:46:20.306055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-03 00:46:20.306075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-03 00:46:20.306112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-03 00:46:20.306127 | orchestrator | 2025-05-03 00:46:20.306142 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-05-03 00:46:20.306156 | orchestrator | Saturday 03 May 2025 00:45:56 +0000 (0:00:02.496) 0:00:05.884 ********** 2025-05-03 00:46:20.306171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-03 00:46:20.306193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-03 00:46:20.306208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-03 00:46:20.306222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-03 00:46:20.306237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-03 00:46:20.306259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-03 00:46:20.306274 | orchestrator | 2025-05-03 00:46:20.306288 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-05-03 00:46:20.306302 | orchestrator | Saturday 03 May 2025 00:45:59 +0000 (0:00:02.992) 0:00:08.877 ********** 2025-05-03 00:46:20.306317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-03 00:46:20.306338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-03 00:46:20.306352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-03 00:46:20.306367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-03 00:46:20.306382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-03 00:46:20.306402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-03 00:46:23.348655 | orchestrator | 2025-05-03 00:46:23.348981 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-03 00:46:23.349127 | orchestrator | Saturday 03 May 2025 00:46:02 +0000 (0:00:02.337) 0:00:11.214 ********** 2025-05-03 00:46:23.349198 | orchestrator | 2025-05-03 00:46:23.349229 | orchestrator | TASK [redis : Flush handlers] 
************************************************** 2025-05-03 00:46:23.349263 | orchestrator | Saturday 03 May 2025 00:46:02 +0000 (0:00:00.131) 0:00:11.346 **********
2025-05-03 00:46:23.349296 | orchestrator |
2025-05-03 00:46:23.349322 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-05-03 00:46:23.349346 | orchestrator | Saturday 03 May 2025 00:46:02 +0000 (0:00:00.118) 0:00:11.465 **********
2025-05-03 00:46:23.349371 | orchestrator |
2025-05-03 00:46:23.349397 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-05-03 00:46:23.349425 | orchestrator | Saturday 03 May 2025 00:46:02 +0000 (0:00:00.292) 0:00:11.757 **********
2025-05-03 00:46:23.349451 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:46:23.349479 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:46:23.349505 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:46:23.349532 | orchestrator |
2025-05-03 00:46:23.349558 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-05-03 00:46:23.349585 | orchestrator | Saturday 03 May 2025 00:46:07 +0000 (0:00:04.381) 0:00:16.140 **********
2025-05-03 00:46:23.349610 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:46:23.349635 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:46:23.349682 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:46:23.349709 | orchestrator |
2025-05-03 00:46:23.349735 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 00:46:23.349760 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:46:23.349813 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:46:23.349836 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 00:46:23.349860 | orchestrator |
2025-05-03 00:46:23.349898 | orchestrator |
2025-05-03 00:46:23.349927 | orchestrator | TASKS RECAP ********************************************************************
2025-05-03 00:46:23.349961 | orchestrator | Saturday 03 May 2025 00:46:17 +0000 (0:00:10.768) 0:00:26.908 **********
2025-05-03 00:46:23.349984 | orchestrator | ===============================================================================
2025-05-03 00:46:23.350007 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.77s
2025-05-03 00:46:23.350122 | orchestrator | redis : Restart redis container ----------------------------------------- 4.38s
2025-05-03 00:46:23.350140 | orchestrator | redis : Copying over redis config files --------------------------------- 2.99s
2025-05-03 00:46:23.350154 | orchestrator | redis : Copying over default config.json files -------------------------- 2.50s
2025-05-03 00:46:23.350168 | orchestrator | redis : Check redis containers ------------------------------------------ 2.34s
2025-05-03 00:46:23.350182 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.50s
2025-05-03 00:46:23.350196 | orchestrator | redis : include_tasks --------------------------------------------------- 0.75s
2025-05-03 00:46:23.350210 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.54s
2025-05-03 00:46:23.350224 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s
2025-05-03 00:46:23.350237 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.40s
2025-05-03 00:46:23.350252 | orchestrator | 2025-05-03 00:46:20 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:46:23.350290 | orchestrator | 2025-05-03 00:46:23 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:46:26.399972 | orchestrator |
2025-05-03 00:46:23 | INFO  | Task 9a37dffd-f1da-42b3-89a6-3729c2e18934 is in state STARTED
2025-05-03 00:46:26.400107 | orchestrator | 2025-05-03 00:46:23 | INFO  | Task 66bbf261-d1c1-4149-bca4-9dd405bf4404 is in state STARTED
2025-05-03 00:46:26.400157 | orchestrator | 2025-05-03 00:46:23 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED
2025-05-03 00:46:26.400173 | orchestrator | 2025-05-03 00:46:23 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:46:26.400188 | orchestrator | 2025-05-03 00:46:23 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:46:26.400220 | orchestrator | 2025-05-03 00:46:26 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:46:26.401409 | orchestrator | 2025-05-03 00:46:26 | INFO  | Task 9a37dffd-f1da-42b3-89a6-3729c2e18934 is in state STARTED
2025-05-03 00:46:26.402573 | orchestrator | 2025-05-03 00:46:26 | INFO  | Task 66bbf261-d1c1-4149-bca4-9dd405bf4404 is in state STARTED
2025-05-03 00:46:26.402605 | orchestrator | 2025-05-03 00:46:26 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED
2025-05-03 00:46:26.403279 | orchestrator | 2025-05-03 00:46:26 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:46:29.441093 | orchestrator | 2025-05-03 00:46:26 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:46:29.441213 | orchestrator | 2025-05-03 00:46:29 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:46:29.446502 | orchestrator | 2025-05-03 00:46:29 | INFO  | Task 9a37dffd-f1da-42b3-89a6-3729c2e18934 is in state STARTED
2025-05-03 00:46:29.447006 | orchestrator | 2025-05-03 00:46:29 | INFO  | Task 66bbf261-d1c1-4149-bca4-9dd405bf4404 is in state STARTED
2025-05-03 00:46:29.447663 | orchestrator | 2025-05-03 00:46:29 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED
2025-05-03 00:46:29.449119 | orchestrator | 2025-05-03 00:46:29 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:46:32.479623 | orchestrator | 2025-05-03 00:46:29 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:46:32.479800 | orchestrator | 2025-05-03 00:46:32 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:46:32.480058 | orchestrator | 2025-05-03 00:46:32 | INFO  | Task 9a37dffd-f1da-42b3-89a6-3729c2e18934 is in state STARTED
2025-05-03 00:46:32.480621 | orchestrator | 2025-05-03 00:46:32 | INFO  | Task 66bbf261-d1c1-4149-bca4-9dd405bf4404 is in state STARTED
2025-05-03 00:46:32.481215 | orchestrator | 2025-05-03 00:46:32 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED
2025-05-03 00:46:32.481876 | orchestrator | 2025-05-03 00:46:32 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:46:32.482157 | orchestrator | 2025-05-03 00:46:32 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:46:35.522118 | orchestrator | 2025-05-03 00:46:35 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:46:35.522537 | orchestrator | 2025-05-03 00:46:35 | INFO  | Task 9a37dffd-f1da-42b3-89a6-3729c2e18934 is in state STARTED
2025-05-03 00:46:35.523073 | orchestrator | 2025-05-03 00:46:35 | INFO  | Task 66bbf261-d1c1-4149-bca4-9dd405bf4404 is in state STARTED
2025-05-03 00:46:35.524202 | orchestrator | 2025-05-03 00:46:35 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED
2025-05-03 00:46:35.525601 | orchestrator | 2025-05-03 00:46:35 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:46:38.558312 | orchestrator | 2025-05-03 00:46:35 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:46:38.558478 | orchestrator | 2025-05-03 00:46:38 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:46:38.558590 | orchestrator | 2025-05-03 00:46:38 | INFO  | Task 9a37dffd-f1da-42b3-89a6-3729c2e18934 is in state STARTED
2025-05-03 00:46:38.559513 | orchestrator | 2025-05-03 00:46:38 | INFO  | Task 66bbf261-d1c1-4149-bca4-9dd405bf4404 is in state STARTED
2025-05-03 00:46:38.560009 | orchestrator | 2025-05-03 00:46:38 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED
2025-05-03 00:46:38.560939 | orchestrator | 2025-05-03 00:46:38 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:46:41.606877 | orchestrator | 2025-05-03 00:46:38 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:46:41.606999 | orchestrator | 2025-05-03 00:46:41 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:46:41.607092 | orchestrator | 2025-05-03 00:46:41 | INFO  | Task 9a37dffd-f1da-42b3-89a6-3729c2e18934 is in state STARTED
2025-05-03 00:46:41.607658 | orchestrator | 2025-05-03 00:46:41 | INFO  | Task 66bbf261-d1c1-4149-bca4-9dd405bf4404 is in state STARTED
2025-05-03 00:46:41.608324 | orchestrator | 2025-05-03 00:46:41 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED
2025-05-03 00:46:41.608904 | orchestrator | 2025-05-03 00:46:41 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:46:44.654526 | orchestrator | 2025-05-03 00:46:41 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:46:44.655580 | orchestrator | 2025-05-03 00:46:44 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:46:44.655802 | orchestrator | 2025-05-03 00:46:44 | INFO  | Task 9a37dffd-f1da-42b3-89a6-3729c2e18934 is in state STARTED
2025-05-03 00:46:44.656862 | orchestrator | 2025-05-03 00:46:44 | INFO  | Task 66bbf261-d1c1-4149-bca4-9dd405bf4404 is in state STARTED
2025-05-03 00:46:44.658421 | orchestrator | 2025-05-03 00:46:44 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED
2025-05-03 00:46:44.659791 | orchestrator | 2025-05-03 00:46:44 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:46:47.714918 | orchestrator | 2025-05-03 00:46:44 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:46:47.715071 | orchestrator | 2025-05-03 00:46:47 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:46:47.715127 | orchestrator | 2025-05-03 00:46:47 | INFO  | Task 9a37dffd-f1da-42b3-89a6-3729c2e18934 is in state STARTED
2025-05-03 00:46:47.715140 | orchestrator | 2025-05-03 00:46:47 | INFO  | Task 66bbf261-d1c1-4149-bca4-9dd405bf4404 is in state STARTED
2025-05-03 00:46:47.715153 | orchestrator | 2025-05-03 00:46:47 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED
2025-05-03 00:46:47.717047 | orchestrator | 2025-05-03 00:46:47 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:46:50.764076 | orchestrator | 2025-05-03 00:46:47 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:46:50.764223 | orchestrator | 2025-05-03 00:46:50 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:46:50.764922 | orchestrator | 2025-05-03 00:46:50 | INFO  | Task 9a37dffd-f1da-42b3-89a6-3729c2e18934 is in state STARTED
2025-05-03 00:46:50.764979 | orchestrator | 2025-05-03 00:46:50 | INFO  | Task 66bbf261-d1c1-4149-bca4-9dd405bf4404 is in state STARTED
2025-05-03 00:46:50.768972 | orchestrator | 2025-05-03 00:46:50 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED
2025-05-03 00:46:50.769410 | orchestrator | 2025-05-03 00:46:50 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:46:53.822099 | orchestrator | 2025-05-03 00:46:50 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:46:53.822239 | orchestrator | 2025-05-03 00:46:53 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:46:53.822670 | orchestrator | 2025-05-03 00:46:53 | INFO  | Task 9a37dffd-f1da-42b3-89a6-3729c2e18934 is in state STARTED
2025-05-03 00:46:53.822703 | orchestrator | 2025-05-03 00:46:53 | INFO  | Task 66bbf261-d1c1-4149-bca4-9dd405bf4404 is in state STARTED
2025-05-03 00:46:53.822727 | orchestrator | 2025-05-03 00:46:53 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED
2025-05-03 00:46:53.823594 | orchestrator | 2025-05-03 00:46:53 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:46:56.877578 | orchestrator | 2025-05-03 00:46:53 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:46:56.877792 | orchestrator | 2025-05-03 00:46:56 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:46:56.878130 | orchestrator | 2025-05-03 00:46:56 | INFO  | Task 9a37dffd-f1da-42b3-89a6-3729c2e18934 is in state STARTED
2025-05-03 00:46:56.878170 | orchestrator | 2025-05-03 00:46:56 | INFO  | Task 66bbf261-d1c1-4149-bca4-9dd405bf4404 is in state STARTED
2025-05-03 00:46:56.878704 | orchestrator | 2025-05-03 00:46:56 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED
2025-05-03 00:46:56.879520 | orchestrator | 2025-05-03 00:46:56 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:46:59.921999 | orchestrator | 2025-05-03 00:46:56 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:46:59.922258 | orchestrator | 2025-05-03 00:46:59 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:46:59.922343 | orchestrator | 2025-05-03 00:46:59 | INFO  | Task 9a37dffd-f1da-42b3-89a6-3729c2e18934 is in state STARTED
2025-05-03 00:46:59.923212 | orchestrator | 2025-05-03 00:46:59 | INFO  | Task 66bbf261-d1c1-4149-bca4-9dd405bf4404 is in state STARTED
2025-05-03 00:46:59.924333 | orchestrator | 2025-05-03 00:46:59 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED
2025-05-03 00:46:59.925857 | orchestrator | 2025-05-03 00:46:59 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:47:02.970386 | orchestrator | 2025-05-03 00:46:59 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:47:02.970508 | orchestrator | 2025-05-03 00:47:02 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:47:02.971505 | orchestrator | 2025-05-03 00:47:02 | INFO  | Task 9a37dffd-f1da-42b3-89a6-3729c2e18934 is in state STARTED
2025-05-03 00:47:02.975404 | orchestrator | 2025-05-03 00:47:02 | INFO  | Task 66bbf261-d1c1-4149-bca4-9dd405bf4404 is in state STARTED
2025-05-03 00:47:02.975470 | orchestrator | 2025-05-03 00:47:02 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED
2025-05-03 00:47:02.976343 | orchestrator | 2025-05-03 00:47:02 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:47:02.977094 | orchestrator | 2025-05-03 00:47:02 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:47:06.029238 | orchestrator | 2025-05-03 00:47:06 | INFO  | Task ea02c298-66ad-47bd-ac8a-8cb3fb2b5ef8 is in state STARTED
2025-05-03 00:47:06.029414 | orchestrator | 2025-05-03 00:47:06 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:47:06.030318 | orchestrator | 2025-05-03 00:47:06 | INFO  | Task 9a37dffd-f1da-42b3-89a6-3729c2e18934 is in state SUCCESS
2025-05-03 00:47:06.032043 | orchestrator |
2025-05-03 00:47:06.032095 | orchestrator |
2025-05-03 00:47:06.032140 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-03 00:47:06.032152 | orchestrator |
2025-05-03 00:47:06.032160 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-03 00:47:06.032167 | orchestrator | Saturday 03 May 2025 00:45:50 +0000 (0:00:00.740) 0:00:00.740 **********
2025-05-03 00:47:06.032175 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:47:06.032185 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:47:06.032192 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:47:06.032199 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:47:06.032206 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:47:06.032213 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:47:06.032220 | orchestrator |
2025-05-03 00:47:06.032228 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-03 00:47:06.032235 | orchestrator | Saturday 03 May 2025 00:45:50 +0000 (0:00:00.661) 0:00:01.402 **********
2025-05-03 00:47:06.032242 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-03 00:47:06.032250 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-03 00:47:06.032257 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-03 00:47:06.032265 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-03 00:47:06.032272 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-03 00:47:06.032285 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-03 00:47:06.032293 | orchestrator |
2025-05-03 00:47:06.032300 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-05-03 00:47:06.032307 | orchestrator |
2025-05-03 00:47:06.032314 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-05-03 00:47:06.032322 | orchestrator | Saturday 03 May 2025 00:45:51 +0000 (0:00:00.935) 0:00:02.338 **********
2025-05-03 00:47:06.032331 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-03 00:47:06.032339 | orchestrator |
2025-05-03 00:47:06.032348 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-05-03 00:47:06.032356 | orchestrator | Saturday 03 May 2025 00:45:53 +0000 (0:00:01.610) 0:00:03.948 **********
2025-05-03 00:47:06.032364 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-05-03 00:47:06.032373 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-05-03 00:47:06.032382 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-05-03 00:47:06.032391 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-05-03 00:47:06.032399 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-05-03 00:47:06.032408 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-05-03 00:47:06.032416 | orchestrator |
2025-05-03 00:47:06.032425 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-05-03 00:47:06.032433 | orchestrator | Saturday 03 May 2025 00:45:54 +0000 (0:00:01.411) 0:00:05.360 **********
2025-05-03 00:47:06.032442 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-05-03 00:47:06.032460 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-05-03 00:47:06.032468 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-05-03 00:47:06.032476 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-05-03 00:47:06.032483 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-05-03 00:47:06.032490 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-05-03 00:47:06.032498 | orchestrator |
2025-05-03 00:47:06.032505 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-05-03 00:47:06.032512 | orchestrator | Saturday 03 May 2025 00:45:57 +0000 (0:00:02.497) 0:00:07.857 **********
2025-05-03 00:47:06.032525 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-05-03 00:47:06.032533 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:47:06.032541 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-05-03 00:47:06.032548 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:47:06.032556 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-05-03 00:47:06.032563 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:47:06.032571 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-05-03 00:47:06.032578 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:47:06.032586 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-05-03 00:47:06.032593 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:47:06.032600 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-05-03 00:47:06.032607 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:47:06.032615 | orchestrator |
2025-05-03 00:47:06.032622 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-05-03 00:47:06.032631 | orchestrator | Saturday 03 May 2025 00:45:59 +0000 (0:00:01.935) 0:00:09.793 **********
2025-05-03 00:47:06.032639 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:47:06.032648 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:47:06.032657 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:47:06.032665 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:47:06.032675 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:47:06.032684 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:47:06.032692 | orchestrator |
2025-05-03 00:47:06.032701 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2025-05-03 00:47:06.032710 | orchestrator | Saturday 03 May 2025 00:46:00 +0000 (0:00:01.152) 0:00:10.945 **********
2025-05-03 00:47:06.032769 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server',
'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-03 00:47:06.032783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-03 00:47:06.032793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-03 00:47:06.032807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-03 00:47:06.032816 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-03 00:47:06.032830 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-03 00:47:06.032839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-03 00:47:06.032849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-03 
00:47:06.032859 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-03 00:47:06.032870 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-03 00:47:06.032879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-03 00:47:06.032893 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-03 00:47:06.032901 | orchestrator | 2025-05-03 00:47:06.032910 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-05-03 00:47:06.032918 | orchestrator | Saturday 03 May 2025 00:46:02 +0000 (0:00:02.216) 0:00:13.161 ********** 2025-05-03 00:47:06.032926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 
'timeout': '30'}}}) 2025-05-03 00:47:06.032935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-03 00:47:06.032947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-03 00:47:06.032956 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-03 00:47:06.032966 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-03 00:47:06.032998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-03 00:47:06.033007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-03 00:47:06.033016 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-03 00:47:06.033027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-03 00:47:06.033042 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-03 00:47:06.033055 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-03 00:47:06.033064 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-03 00:47:06.033072 | orchestrator | 2025-05-03 00:47:06.033080 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ****** 2025-05-03 00:47:06.033087 | orchestrator | Saturday 03 May 2025 00:46:06 +0000 (0:00:03.747) 0:00:16.909 ********** 2025-05-03 00:47:06.033095 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:47:06.033103 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:47:06.033111 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:47:06.033122 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:47:06.033130 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:47:06.033138 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:47:06.033146 | orchestrator | 2025-05-03 00:47:06.033153 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] *** 2025-05-03 00:47:06.033162 | orchestrator | Saturday 03 May 2025 00:46:10 +0000 (0:00:04.257) 0:00:21.167 ********** 2025-05-03 00:47:06.033170 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:47:06.033178 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:47:06.033185 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:47:06.033193 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:47:06.033201 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:47:06.033247 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:47:06.033256 | orchestrator | 2025-05-03 00:47:06.033265 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-05-03 00:47:06.033273 | orchestrator | Saturday 03 May 2025 00:46:12 +0000 
(0:00:02.196) 0:00:23.363 ********** 2025-05-03 00:47:06.033281 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:47:06.033290 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:47:06.033298 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:47:06.033308 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:47:06.033316 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:47:06.033325 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:47:06.033334 | orchestrator | 2025-05-03 00:47:06.033343 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-05-03 00:47:06.033352 | orchestrator | Saturday 03 May 2025 00:46:13 +0000 (0:00:01.171) 0:00:24.534 ********** 2025-05-03 00:47:06.033362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-03 00:47:06.033393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-03 00:47:06.033411 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-03 00:47:06.033432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-03 00:47:06.033447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-03 00:47:06.033456 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-03 00:47:06.033465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-03 00:47:06.033473 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-03 00:47:06.033498 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-03 00:47:06.033512 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-03 00:47:06.033521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-03 00:47:06.033530 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-03 00:47:06.033538 | orchestrator | 2025-05-03 00:47:06.033546 | orchestrator | TASK [openvswitch : Flush 
Handlers] ********************************************
2025-05-03 00:47:06.033554 | orchestrator | Saturday 03 May 2025 00:46:16 +0000 (0:00:02.564) 0:00:27.098 **********
2025-05-03 00:47:06.033563 | orchestrator |
2025-05-03 00:47:06.033571 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-05-03 00:47:06.033579 | orchestrator | Saturday 03 May 2025 00:46:16 +0000 (0:00:00.109) 0:00:27.208 **********
2025-05-03 00:47:06.033587 | orchestrator |
2025-05-03 00:47:06.033595 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-05-03 00:47:06.033603 | orchestrator | Saturday 03 May 2025 00:46:16 +0000 (0:00:00.235) 0:00:27.443 **********
2025-05-03 00:47:06.033611 | orchestrator |
2025-05-03 00:47:06.033619 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-05-03 00:47:06.033627 | orchestrator | Saturday 03 May 2025 00:46:17 +0000 (0:00:00.108) 0:00:27.552 **********
2025-05-03 00:47:06.033635 | orchestrator |
2025-05-03 00:47:06.033646 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-05-03 00:47:06.033655 | orchestrator | Saturday 03 May 2025 00:46:17 +0000 (0:00:00.231) 0:00:27.783 **********
2025-05-03 00:47:06.033662 | orchestrator |
2025-05-03 00:47:06.033671 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-05-03 00:47:06.033679 | orchestrator | Saturday 03 May 2025 00:46:17 +0000 (0:00:00.117) 0:00:27.901 **********
2025-05-03 00:47:06.033687 | orchestrator |
2025-05-03 00:47:06.033695 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-05-03 00:47:06.033708 | orchestrator | Saturday 03 May 2025 00:46:17 +0000 (0:00:00.217) 0:00:28.118 **********
2025-05-03 00:47:06.033715 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:47:06.033723 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:47:06.033758 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:47:06.033765 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:47:06.033773 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:47:06.033780 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:47:06.033787 | orchestrator |
2025-05-03 00:47:06.033794 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-05-03 00:47:06.033802 | orchestrator | Saturday 03 May 2025 00:46:28 +0000 (0:00:10.824) 0:00:38.942 **********
2025-05-03 00:47:06.033816 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:47:06.033824 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:47:06.033832 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:47:06.033841 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:47:06.033849 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:47:06.033857 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:47:06.033865 | orchestrator |
2025-05-03 00:47:06.033874 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-05-03 00:47:06.033881 | orchestrator | Saturday 03 May 2025 00:46:30 +0000 (0:00:02.224) 0:00:41.167 **********
2025-05-03 00:47:06.033889 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:47:06.033897 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:47:06.033905 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:47:06.033920 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:47:06.033929 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:47:06.033937 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:47:06.033945 | orchestrator |
2025-05-03 00:47:06.033953 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-05-03 00:47:06.033961 | orchestrator | Saturday 03 May 2025 00:46:40 +0000 (0:00:09.679) 0:00:50.846 **********
2025-05-03 00:47:06.033969 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-05-03 00:47:06.033978 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-05-03 00:47:06.033986 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-05-03 00:47:06.033994 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-05-03 00:47:06.034001 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-05-03 00:47:06.034008 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-05-03 00:47:06.034083 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-05-03 00:47:06.034094 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-05-03 00:47:06.034102 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-05-03 00:47:06.034110 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-05-03 00:47:06.034118 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-05-03 00:47:06.034126 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-05-03 00:47:06.034135 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-03 00:47:06.034143 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-03 00:47:06.034159 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-03 00:47:06.034167 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-03 00:47:06.034175 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-03 00:47:06.034188 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-03 00:47:06.034197 | orchestrator |
2025-05-03 00:47:06.034206 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-05-03 00:47:06.034214 | orchestrator | Saturday 03 May 2025 00:46:47 +0000 (0:00:07.584) 0:00:58.431 **********
2025-05-03 00:47:06.034222 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-05-03 00:47:06.034229 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:47:06.034238 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-05-03 00:47:06.034245 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:47:06.034253 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-05-03 00:47:06.034260 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:47:06.034267 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-05-03 00:47:06.034274 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-05-03 00:47:06.034282 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-05-03 00:47:06.034289 | orchestrator |
2025-05-03 00:47:06.034297 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-05-03 00:47:06.034306 | orchestrator | Saturday 03 May
2025 00:46:50 +0000 (0:00:02.519) 0:01:00.951 ********** 2025-05-03 00:47:06.034314 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-05-03 00:47:06.034322 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:47:06.034330 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-05-03 00:47:06.034338 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:47:06.034346 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-05-03 00:47:06.034354 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:47:06.034363 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-05-03 00:47:06.034380 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-05-03 00:47:06.034652 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-05-03 00:47:06.034855 | orchestrator | 2025-05-03 00:47:06.034879 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-03 00:47:06.034895 | orchestrator | Saturday 03 May 2025 00:46:55 +0000 (0:00:04.768) 0:01:05.719 ********** 2025-05-03 00:47:06.034909 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:47:06.034924 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:47:06.034939 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:47:06.034953 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:47:06.034967 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:47:06.034981 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:47:06.034995 | orchestrator | 2025-05-03 00:47:06.035009 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-03 00:47:06.035025 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-03 00:47:06.035042 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  
rescued=0 ignored=0 2025-05-03 00:47:06.035056 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-03 00:47:06.035070 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-03 00:47:06.035118 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-03 00:47:06.035150 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-03 00:47:06.035165 | orchestrator | 2025-05-03 00:47:06.035181 | orchestrator | 2025-05-03 00:47:06.035198 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-03 00:47:06.035214 | orchestrator | Saturday 03 May 2025 00:47:03 +0000 (0:00:08.447) 0:01:14.167 ********** 2025-05-03 00:47:06.035230 | orchestrator | =============================================================================== 2025-05-03 00:47:06.035246 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.13s 2025-05-03 00:47:06.035262 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.82s 2025-05-03 00:47:06.035279 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.58s 2025-05-03 00:47:06.035295 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.77s 2025-05-03 00:47:06.035311 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 4.26s 2025-05-03 00:47:06.035327 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.75s 2025-05-03 00:47:06.035343 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.56s 2025-05-03 00:47:06.035358 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 
2.52s 2025-05-03 00:47:06.035372 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.50s 2025-05-03 00:47:06.035386 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.22s 2025-05-03 00:47:06.035405 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.22s 2025-05-03 00:47:06.035419 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 2.20s 2025-05-03 00:47:06.035434 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.94s 2025-05-03 00:47:06.035448 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.61s 2025-05-03 00:47:06.035462 | orchestrator | module-load : Load modules ---------------------------------------------- 1.41s 2025-05-03 00:47:06.035476 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.17s 2025-05-03 00:47:06.035490 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.15s 2025-05-03 00:47:06.035504 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.02s 2025-05-03 00:47:06.035518 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.94s 2025-05-03 00:47:06.035532 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.66s 2025-05-03 00:47:06.035545 | orchestrator | 2025-05-03 00:47:06 | INFO  | Task 66bbf261-d1c1-4149-bca4-9dd405bf4404 is in state STARTED 2025-05-03 00:47:06.035577 | orchestrator | 2025-05-03 00:47:06 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED 2025-05-03 00:47:09.075063 | orchestrator | 2025-05-03 00:47:06 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:47:09.075209 | orchestrator | 2025-05-03 00:47:06 | INFO  | Wait 1 
second(s) until the next check 2025-05-03 00:48:25.270384 | orchestrator | 2025-05-03 00:48:25 | INFO  | Task ea02c298-66ad-47bd-ac8a-8cb3fb2b5ef8 is in state STARTED 2025-05-03 00:48:25.271502 | orchestrator | 2025-05-03 00:48:25 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:48:25.271546 | orchestrator | 2025-05-03 00:48:25 | INFO  | Task 66bbf261-d1c1-4149-bca4-9dd405bf4404 is in state STARTED 2025-05-03 00:48:25.272235 | orchestrator | 2025-05-03 00:48:25 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED 2025-05-03 00:48:25.273278 | orchestrator | 2025-05-03 00:48:25 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:48:25.273385 | orchestrator | 2025-05-03 00:48:25 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:48:28.344075 | orchestrator | 2025-05-03 00:48:28 | INFO  | Task ea02c298-66ad-47bd-ac8a-8cb3fb2b5ef8 is in state STARTED 2025-05-03 00:48:28.345898 | orchestrator | 2025-05-03 00:48:28 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:48:28.345946 | orchestrator | 2025-05-03 00:48:28 | INFO  | Task 66bbf261-d1c1-4149-bca4-9dd405bf4404 is in state SUCCESS 2025-05-03 00:48:28.347804 | orchestrator | 2025-05-03 00:48:28.347850 | orchestrator | 2025-05-03 00:48:28.347866 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-05-03 00:48:28.347882 | orchestrator | 2025-05-03 00:48:28.347896 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-03 00:48:28.347911 | orchestrator | Saturday 03 May 2025 00:46:13 +0000 (0:00:00.124) 0:00:00.124 ********** 2025-05-03 00:48:28.347926 | orchestrator | ok: [localhost] => { 2025-05-03 00:48:28.347968 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 
2025-05-03 00:48:28.347986 | orchestrator | } 2025-05-03 00:48:28.348001 | orchestrator | 2025-05-03 00:48:28.348016 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-05-03 00:48:28.348031 | orchestrator | Saturday 03 May 2025 00:46:13 +0000 (0:00:00.060) 0:00:00.184 ********** 2025-05-03 00:48:28.348047 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-05-03 00:48:28.348063 | orchestrator | ...ignoring 2025-05-03 00:48:28.348078 | orchestrator | 2025-05-03 00:48:28.348094 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-05-03 00:48:28.348109 | orchestrator | Saturday 03 May 2025 00:46:15 +0000 (0:00:02.589) 0:00:02.773 ********** 2025-05-03 00:48:28.348124 | orchestrator | skipping: [localhost] 2025-05-03 00:48:28.348139 | orchestrator | 2025-05-03 00:48:28.348154 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-05-03 00:48:28.348168 | orchestrator | Saturday 03 May 2025 00:46:15 +0000 (0:00:00.041) 0:00:02.815 ********** 2025-05-03 00:48:28.348183 | orchestrator | ok: [localhost] 2025-05-03 00:48:28.348198 | orchestrator | 2025-05-03 00:48:28.348214 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-03 00:48:28.348229 | orchestrator | 2025-05-03 00:48:28.348264 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-03 00:48:28.348280 | orchestrator | Saturday 03 May 2025 00:46:15 +0000 (0:00:00.131) 0:00:02.946 ********** 2025-05-03 00:48:28.348294 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:48:28.348310 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:48:28.348324 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:48:28.348339 | orchestrator | 2025-05-03 
00:48:28.348354 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-03 00:48:28.348369 | orchestrator | Saturday 03 May 2025 00:46:16 +0000 (0:00:00.350) 0:00:03.297 ********** 2025-05-03 00:48:28.348384 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-05-03 00:48:28.348402 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-05-03 00:48:28.348418 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-05-03 00:48:28.348435 | orchestrator | 2025-05-03 00:48:28.348451 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-05-03 00:48:28.348467 | orchestrator | 2025-05-03 00:48:28.348484 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-03 00:48:28.348500 | orchestrator | Saturday 03 May 2025 00:46:16 +0000 (0:00:00.373) 0:00:03.670 ********** 2025-05-03 00:48:28.348516 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:48:28.348534 | orchestrator | 2025-05-03 00:48:28.348550 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-05-03 00:48:28.348565 | orchestrator | Saturday 03 May 2025 00:46:17 +0000 (0:00:00.608) 0:00:04.279 ********** 2025-05-03 00:48:28.348579 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:48:28.348594 | orchestrator | 2025-05-03 00:48:28.348609 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-05-03 00:48:28.348623 | orchestrator | Saturday 03 May 2025 00:46:18 +0000 (0:00:01.027) 0:00:05.307 ********** 2025-05-03 00:48:28.348674 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:48:28.348690 | orchestrator | 2025-05-03 00:48:28.348704 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 
2025-05-03 00:48:28.348726 | orchestrator | Saturday 03 May 2025 00:46:18 +0000 (0:00:00.640) 0:00:05.947 ********** 2025-05-03 00:48:28.348740 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:48:28.348754 | orchestrator | 2025-05-03 00:48:28.348768 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-05-03 00:48:28.348782 | orchestrator | Saturday 03 May 2025 00:46:19 +0000 (0:00:00.604) 0:00:06.551 ********** 2025-05-03 00:48:28.348796 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:48:28.348810 | orchestrator | 2025-05-03 00:48:28.348824 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-05-03 00:48:28.348837 | orchestrator | Saturday 03 May 2025 00:46:19 +0000 (0:00:00.334) 0:00:06.885 ********** 2025-05-03 00:48:28.348851 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:48:28.348865 | orchestrator | 2025-05-03 00:48:28.348879 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-03 00:48:28.348893 | orchestrator | Saturday 03 May 2025 00:46:20 +0000 (0:00:00.330) 0:00:07.216 ********** 2025-05-03 00:48:28.348907 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:48:28.348921 | orchestrator | 2025-05-03 00:48:28.348934 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-05-03 00:48:28.348948 | orchestrator | Saturday 03 May 2025 00:46:20 +0000 (0:00:00.779) 0:00:07.996 ********** 2025-05-03 00:48:28.348962 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:48:28.348976 | orchestrator | 2025-05-03 00:48:28.348990 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-05-03 00:48:28.349004 | orchestrator | Saturday 03 May 2025 00:46:21 +0000 (0:00:00.892) 0:00:08.888 ********** 2025-05-03 
00:48:28.349018 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:48:28.349032 | orchestrator | 2025-05-03 00:48:28.349052 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-05-03 00:48:28.349066 | orchestrator | Saturday 03 May 2025 00:46:22 +0000 (0:00:00.308) 0:00:09.197 ********** 2025-05-03 00:48:28.349080 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:48:28.349094 | orchestrator | 2025-05-03 00:48:28.349116 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-05-03 00:48:28.349130 | orchestrator | Saturday 03 May 2025 00:46:22 +0000 (0:00:00.299) 0:00:09.496 ********** 2025-05-03 00:48:28.349168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-03 00:48:28.349188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-03 00:48:28.349203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 
'host_group': 'rabbitmq'}}}}) 2025-05-03 00:48:28.349218 | orchestrator | 2025-05-03 00:48:28.349232 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-05-03 00:48:28.349246 | orchestrator | Saturday 03 May 2025 00:46:23 +0000 (0:00:00.880) 0:00:10.376 ********** 2025-05-03 00:48:28.349286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-03 00:48:28.349303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-03 00:48:28.349319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-03 00:48:28.349333 | orchestrator | 2025-05-03 00:48:28.349348 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-05-03 00:48:28.349362 | orchestrator | Saturday 03 May 2025 00:46:24 +0000 (0:00:01.653) 0:00:12.030 ********** 2025-05-03 00:48:28.349376 | orchestrator | 
changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-03 00:48:28.349390 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-03 00:48:28.349404 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-03 00:48:28.349418 | orchestrator | 2025-05-03 00:48:28.349432 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-05-03 00:48:28.349446 | orchestrator | Saturday 03 May 2025 00:46:27 +0000 (0:00:02.881) 0:00:14.912 ********** 2025-05-03 00:48:28.349466 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-03 00:48:28.349480 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-03 00:48:28.349494 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-03 00:48:28.349508 | orchestrator | 2025-05-03 00:48:28.349522 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-05-03 00:48:28.349536 | orchestrator | Saturday 03 May 2025 00:46:30 +0000 (0:00:02.875) 0:00:17.787 ********** 2025-05-03 00:48:28.349550 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-03 00:48:28.349564 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-03 00:48:28.349578 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-03 00:48:28.349592 | orchestrator | 2025-05-03 00:48:28.349612 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-05-03 00:48:28.349628 | orchestrator | Saturday 03 May 2025 00:46:34 +0000 (0:00:03.330) 0:00:21.117 ********** 
2025-05-03 00:48:28.349658 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-03 00:48:28.349673 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-03 00:48:28.349687 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-03 00:48:28.349701 | orchestrator | 2025-05-03 00:48:28.349715 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-05-03 00:48:28.349729 | orchestrator | Saturday 03 May 2025 00:46:35 +0000 (0:00:01.909) 0:00:23.026 ********** 2025-05-03 00:48:28.349743 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-03 00:48:28.349757 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-03 00:48:28.349771 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-03 00:48:28.349785 | orchestrator | 2025-05-03 00:48:28.349798 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-05-03 00:48:28.349817 | orchestrator | Saturday 03 May 2025 00:46:37 +0000 (0:00:01.632) 0:00:24.659 ********** 2025-05-03 00:48:28.349831 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-03 00:48:28.349909 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-03 00:48:28.349925 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-03 00:48:28.349939 | orchestrator | 2025-05-03 00:48:28.349953 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-03 00:48:28.349967 | orchestrator | Saturday 03 
May 2025 00:46:39 +0000 (0:00:01.898) 0:00:26.557 ********** 2025-05-03 00:48:28.349980 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:48:28.349994 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:48:28.350008 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:48:28.350102 | orchestrator | 2025-05-03 00:48:28.350118 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-05-03 00:48:28.350132 | orchestrator | Saturday 03 May 2025 00:46:40 +0000 (0:00:00.844) 0:00:27.402 ********** 2025-05-03 00:48:28.350147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-03 00:48:28.350172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-03 00:48:28.350199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-03 00:48:28.350215 | orchestrator | 2025-05-03 00:48:28.350229 | orchestrator | TASK [rabbitmq : Creating 
rabbitmq volume] ************************************* 2025-05-03 00:48:28.350243 | orchestrator | Saturday 03 May 2025 00:46:42 +0000 (0:00:01.895) 0:00:29.297 ********** 2025-05-03 00:48:28.350257 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:48:28.350270 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:48:28.350284 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:48:28.350298 | orchestrator | 2025-05-03 00:48:28.350312 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-05-03 00:48:28.350325 | orchestrator | Saturday 03 May 2025 00:46:43 +0000 (0:00:01.077) 0:00:30.375 ********** 2025-05-03 00:48:28.350339 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:48:28.350353 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:48:28.350367 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:48:28.350380 | orchestrator | 2025-05-03 00:48:28.350394 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-05-03 00:48:28.350408 | orchestrator | Saturday 03 May 2025 00:46:49 +0000 (0:00:06.321) 0:00:36.696 ********** 2025-05-03 00:48:28.350422 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:48:28.350442 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:48:28.350456 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:48:28.350470 | orchestrator | 2025-05-03 00:48:28.350484 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-03 00:48:28.350497 | orchestrator | 2025-05-03 00:48:28.350512 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-03 00:48:28.350525 | orchestrator | Saturday 03 May 2025 00:46:50 +0000 (0:00:00.565) 0:00:37.262 ********** 2025-05-03 00:48:28.350591 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:48:28.350608 | orchestrator | 2025-05-03 00:48:28.350622 | orchestrator | TASK [rabbitmq : 
Put RabbitMQ node into maintenance mode] ********************** 2025-05-03 00:48:28.350652 | orchestrator | Saturday 03 May 2025 00:46:51 +0000 (0:00:00.868) 0:00:38.130 ********** 2025-05-03 00:48:28.350666 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:48:28.350680 | orchestrator | 2025-05-03 00:48:28.350694 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-03 00:48:28.350708 | orchestrator | Saturday 03 May 2025 00:46:51 +0000 (0:00:00.241) 0:00:38.371 ********** 2025-05-03 00:48:28.350722 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:48:28.350736 | orchestrator | 2025-05-03 00:48:28.350749 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-03 00:48:28.350763 | orchestrator | Saturday 03 May 2025 00:46:53 +0000 (0:00:01.722) 0:00:40.094 ********** 2025-05-03 00:48:28.350777 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:48:28.350791 | orchestrator | 2025-05-03 00:48:28.350805 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-03 00:48:28.350819 | orchestrator | 2025-05-03 00:48:28.350832 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-03 00:48:28.350846 | orchestrator | Saturday 03 May 2025 00:47:45 +0000 (0:00:52.805) 0:01:32.899 ********** 2025-05-03 00:48:28.350860 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:48:28.350874 | orchestrator | 2025-05-03 00:48:28.350887 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-03 00:48:28.350901 | orchestrator | Saturday 03 May 2025 00:47:46 +0000 (0:00:00.584) 0:01:33.483 ********** 2025-05-03 00:48:28.350915 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:48:28.350929 | orchestrator | 2025-05-03 00:48:28.350943 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] 
*********************************** 2025-05-03 00:48:28.350957 | orchestrator | Saturday 03 May 2025 00:47:46 +0000 (0:00:00.314) 0:01:33.797 ********** 2025-05-03 00:48:28.350971 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:48:28.350984 | orchestrator | 2025-05-03 00:48:28.350998 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-03 00:48:28.351012 | orchestrator | Saturday 03 May 2025 00:47:48 +0000 (0:00:01.997) 0:01:35.795 ********** 2025-05-03 00:48:28.351026 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:48:28.351040 | orchestrator | 2025-05-03 00:48:28.351054 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-03 00:48:28.351068 | orchestrator | 2025-05-03 00:48:28.351082 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-03 00:48:28.351095 | orchestrator | Saturday 03 May 2025 00:48:04 +0000 (0:00:15.751) 0:01:51.547 ********** 2025-05-03 00:48:28.351109 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:48:28.351123 | orchestrator | 2025-05-03 00:48:28.351143 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-03 00:48:28.351157 | orchestrator | Saturday 03 May 2025 00:48:05 +0000 (0:00:00.713) 0:01:52.260 ********** 2025-05-03 00:48:28.351171 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:48:28.351189 | orchestrator | 2025-05-03 00:48:28.351204 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-03 00:48:28.351225 | orchestrator | Saturday 03 May 2025 00:48:05 +0000 (0:00:00.239) 0:01:52.500 ********** 2025-05-03 00:48:28.351352 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:48:28.351382 | orchestrator | 2025-05-03 00:48:28.351405 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-03 
00:48:28.351442 | orchestrator | Saturday 03 May 2025 00:48:07 +0000 (0:00:01.973) 0:01:54.473 ********** 2025-05-03 00:48:28.351467 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:48:28.351492 | orchestrator | 2025-05-03 00:48:28.351511 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-05-03 00:48:28.351525 | orchestrator | 2025-05-03 00:48:28.351539 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-05-03 00:48:28.351553 | orchestrator | Saturday 03 May 2025 00:48:21 +0000 (0:00:14.515) 0:02:08.989 ********** 2025-05-03 00:48:28.351567 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:48:28.351581 | orchestrator | 2025-05-03 00:48:28.351595 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-05-03 00:48:28.351609 | orchestrator | Saturday 03 May 2025 00:48:22 +0000 (0:00:00.903) 0:02:09.893 ********** 2025-05-03 00:48:28.351623 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-03 00:48:28.351656 | orchestrator | enable_outward_rabbitmq_True 2025-05-03 00:48:28.351671 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-03 00:48:28.351684 | orchestrator | outward_rabbitmq_restart 2025-05-03 00:48:28.351698 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:48:28.351712 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:48:28.351726 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:48:28.351740 | orchestrator | 2025-05-03 00:48:28.351754 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-05-03 00:48:28.351768 | orchestrator | skipping: no hosts matched 2025-05-03 00:48:28.351782 | orchestrator | 2025-05-03 00:48:28.351796 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-05-03 
00:48:28.351809 | orchestrator | skipping: no hosts matched 2025-05-03 00:48:28.351823 | orchestrator | 2025-05-03 00:48:28.351837 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-05-03 00:48:28.351851 | orchestrator | skipping: no hosts matched 2025-05-03 00:48:28.351865 | orchestrator | 2025-05-03 00:48:28.351879 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-03 00:48:28.351893 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-03 00:48:28.351908 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-03 00:48:28.351921 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-03 00:48:28.351936 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-03 00:48:28.351950 | orchestrator | 2025-05-03 00:48:28.351966 | orchestrator | 2025-05-03 00:48:28.351982 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-03 00:48:28.351998 | orchestrator | Saturday 03 May 2025 00:48:25 +0000 (0:00:02.675) 0:02:12.568 ********** 2025-05-03 00:48:28.352013 | orchestrator | =============================================================================== 2025-05-03 00:48:28.352029 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 83.07s 2025-05-03 00:48:28.352044 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.32s 2025-05-03 00:48:28.352060 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.69s 2025-05-03 00:48:28.352075 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 3.33s 2025-05-03 00:48:28.352091 | orchestrator | 
rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.88s 2025-05-03 00:48:28.352106 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.88s 2025-05-03 00:48:28.352130 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.68s 2025-05-03 00:48:28.352146 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.59s 2025-05-03 00:48:28.352161 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.17s 2025-05-03 00:48:28.352177 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.91s 2025-05-03 00:48:28.352193 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.90s 2025-05-03 00:48:28.352209 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.90s 2025-05-03 00:48:28.352225 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.65s 2025-05-03 00:48:28.352246 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.63s 2025-05-03 00:48:28.352263 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.08s 2025-05-03 00:48:28.352279 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.03s 2025-05-03 00:48:28.352294 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 0.90s 2025-05-03 00:48:28.352308 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.89s 2025-05-03 00:48:28.352322 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.88s 2025-05-03 00:48:28.352336 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.84s 2025-05-03 00:48:28.352356 | orchestrator | 2025-05-03 
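The TASKS RECAP above ranks steps by elapsed time. A small sketch of how such recap lines could be parsed into (task, seconds) pairs for ranking; the two sample lines are copied from this log, and the regex is an assumption about the recap format, not part of Ansible itself:

```python
# Sketch: turn Ansible "TASKS RECAP" timing lines into sortable pairs.
# The line format assumption: "<task name> ---...--- <seconds>s".
import re

RECAP_RE = re.compile(r"^(?P<task>.+?)\s-+\s(?P<secs>\d+(?:\.\d+)?)s$")


def parse_recap(lines):
    """Return [(task, seconds), ...] sorted slowest-first."""
    out = []
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            out.append((m.group("task"), float(m.group("secs"))))
    return sorted(out, key=lambda pair: -pair[1])
```

Applied to this run, "rabbitmq : Waiting for rabbitmq to start" (83.07s) dominates, which matches the three sequential one-node-at-a-time restarts visible earlier in the log.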
00:48:28 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED [... identical "is in state STARTED" / "Wait 1 second(s) until the next check" polling output for tasks ea02c298-66ad-47bd-ac8a-8cb3fb2b5ef8, c69429d0-1ca3-4cfa-87ac-47614257638d, 50ea52a1-6660-489a-988a-e6c4a3d15730 and 48a7cfec-8936-4280-adce-1507df83d421, repeated every ~3 seconds from 00:48:28 until 00:49:32, trimmed ...] 2025-05-03 00:49:35.498860 | orchestrator | 2025-05-03 00:49:35.498897 | orchestrator | 2025-05-03 00:49:35.498921 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-03 00:49:35.498943 | orchestrator | 2025-05-03 00:49:35.498965 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-03 00:49:35.498987 | orchestrator | Saturday 03 May 2025 00:47:07 +0000 (0:00:00.243) 0:00:00.243 ********** 2025-05-03 00:49:35.499009 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:49:35.499032 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:49:35.499054 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:49:35.499075 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:49:35.499097 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:49:35.499119 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:49:35.499140 | orchestrator | 2025-05-03 00:49:35.499162 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-03 00:49:35.499184 | orchestrator | Saturday 03 May 2025 00:47:08 +0000 (0:00:00.757) 0:00:01.000 ********** 2025-05-03 00:49:35.499205 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-05-03 00:49:35.499227 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-05-03 00:49:35.499249 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-05-03 00:49:35.499271 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-05-03 00:49:35.499292 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-05-03 00:49:35.499315 | orchestrator | ok: [testbed-node-5]
=> (item=enable_ovn_True) 2025-05-03 00:49:35.499336 | orchestrator | 2025-05-03 00:49:35.499358 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-05-03 00:49:35.499414 | orchestrator | 2025-05-03 00:49:35.499438 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-05-03 00:49:35.499460 | orchestrator | Saturday 03 May 2025 00:47:09 +0000 (0:00:01.386) 0:00:02.386 ********** 2025-05-03 00:49:35.499482 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 00:49:35.499503 | orchestrator | 2025-05-03 00:49:35.499524 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-05-03 00:49:35.499543 | orchestrator | Saturday 03 May 2025 00:47:11 +0000 (0:00:02.106) 0:00:04.493 ********** 2025-05-03 00:49:35.499586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.499611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.499630 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.499678 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.499699 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.499751 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.499773 | orchestrator | 2025-05-03 00:49:35.499792 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] 
************ 2025-05-03 00:49:35.499812 | orchestrator | Saturday 03 May 2025 00:47:13 +0000 (0:00:01.470) 0:00:05.963 ********** 2025-05-03 00:49:35.499840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.499859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.499879 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.499898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.499918 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.499951 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.499972 | orchestrator | 2025-05-03 00:49:35.499992 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-05-03 00:49:35.500012 | orchestrator | Saturday 03 May 2025 00:47:14 +0000 (0:00:01.695) 0:00:07.660 ********** 2025-05-03 00:49:35.500033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.500053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.500094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.500115 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.500135 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.500155 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.500175 | orchestrator | 2025-05-03 00:49:35.500196 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-05-03 00:49:35.500216 | orchestrator | Saturday 03 May 2025 00:47:16 +0000 (0:00:01.190) 0:00:08.850 ********** 2025-05-03 00:49:35.500297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.500321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.500343 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.500366 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.500388 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.500426 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.500448 | orchestrator | 2025-05-03 00:49:35.500470 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-05-03 00:49:35.500492 | orchestrator | Saturday 03 May 2025 00:47:17 +0000 (0:00:01.656) 0:00:10.507 ********** 2025-05-03 00:49:35.500514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.500536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.500610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.500634 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.500654 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 00:49:35.500672 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})

TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
Saturday 03 May 2025 00:47:19 +0000 (0:00:01.620) 0:00:12.127 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

TASK [ovn-controller : Configure OVN in OVSDB] *********************************
Saturday 03 May 2025 00:47:22 +0000 (0:00:02.801) 0:00:14.929 **********
changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})

TASK [ovn-controller : Flush handlers] *****************************************
Saturday 03 May 2025 00:47:40 +0000 (0:00:18.157) 0:00:33.086 **********

TASK [ovn-controller : Flush handlers] *****************************************
Saturday 03 May 2025 00:47:40 +0000 (0:00:00.068) 0:00:33.154 **********

TASK [ovn-controller : Flush handlers] *****************************************
Saturday 03 May 2025 00:47:40 +0000 (0:00:00.234) 0:00:33.389 **********

TASK [ovn-controller : Flush handlers] *****************************************
Saturday 03 May 2025 00:47:40 +0000 (0:00:00.053) 0:00:33.442 **********

TASK [ovn-controller : Flush handlers] *****************************************
Saturday 03 May 2025 00:47:40 +0000 (0:00:00.057) 0:00:33.500 **********

TASK [ovn-controller : Flush handlers] *****************************************
Saturday 03 May 2025 00:47:40 +0000 (0:00:00.055) 0:00:33.556 **********

RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
Saturday 03 May 2025 00:47:41 +0000 (0:00:00.243) 0:00:33.799 **********
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-5]

RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
Saturday 03 May 2025 00:47:43 +0000 (0:00:02.053) 0:00:35.853 **********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-1]

PLAY [Apply role ovn-db] *******************************************************

TASK [ovn-db : include_tasks] **************************************************
Saturday 03 May 2025 00:48:06 +0000 (0:00:23.123) 0:00:58.977 **********
included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ovn-db : include_tasks] **************************************************
Saturday 03 May 2025 00:48:06 +0000 (0:00:00.686) 0:00:59.664 **********
included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
Saturday 03 May 2025 00:48:07 +0000 (0:00:00.994) 0:01:00.658 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
Saturday 03 May 2025 00:48:08 +0000 (0:00:00.780) 0:01:01.438 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
Saturday 03 May 2025 00:48:08 +0000 (0:00:00.225) 0:01:01.664 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
Saturday 03 May 2025 00:48:09 +0000 (0:00:00.315) 0:01:01.979 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
Saturday 03 May 2025 00:48:09 +0000 (0:00:00.354) 0:01:02.334 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
Saturday 03 May 2025 00:48:09 +0000 (0:00:00.338) 0:01:02.673 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ovn-db : Check OVN NB service port liveness] *****************************
Saturday 03 May 2025 00:48:10 +0000 (0:00:00.232) 0:01:02.906 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
Saturday 03 May 2025 00:48:10 +0000 (0:00:00.324) 0:01:03.230 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ovn-db : Get OVN NB database information] ********************************
Saturday 03 May 2025 00:48:10 +0000 (0:00:00.322) 0:01:03.553 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
Saturday 03 May 2025 00:48:11 +0000 (0:00:00.232) 0:01:03.785 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
Saturday 03 May 2025 00:48:11 +0000 (0:00:00.357) 0:01:04.143 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
Saturday 03 May 2025 00:48:11 +0000 (0:00:00.345) 0:01:04.488 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ovn-db : Check OVN SB service port liveness] *****************************
Saturday 03 May 2025 00:48:12 +0000 (0:00:00.373) 0:01:04.862 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
Saturday 03 May 2025 00:48:12 +0000 (0:00:00.244) 0:01:05.106 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ovn-db : Get OVN SB database information] ********************************
Saturday 03 May 2025 00:48:12 +0000 (0:00:00.427) 0:01:05.533 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
Saturday 03 May 2025 00:48:13 +0000 (0:00:00.539) 0:01:06.073 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
Saturday 03 May 2025 00:48:14 +0000 (0:00:00.808) 0:01:06.881 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ovn-db : include_tasks] **************************************************
Saturday 03 May 2025 00:48:14 +0000 (0:00:00.439) 0:01:07.320 **********
included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
Saturday 03 May 2025 00:48:15 +0000 (0:00:01.300) 0:01:08.621 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
Saturday 03 May 2025 00:48:16 +0000 (0:00:00.670) 0:01:09.292 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ovn-db : Check NB cluster status] ****************************************
Saturday 03 May 2025 00:48:17 +0000 (0:00:00.712) 0:01:10.005 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ovn-db : Check SB cluster status] ****************************************
Saturday 03 May 2025 00:48:17 +0000 (0:00:00.529) 0:01:10.534 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
Saturday 03 May 2025 00:48:18 +0000 (0:00:00.586) 0:01:11.121 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
Saturday 03 May 2025 00:48:18 +0000 (0:00:00.383) 0:01:11.504 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
Saturday 03 May 2025 00:48:19 +0000 (0:00:00.489) 0:01:11.994 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
Saturday 03 May 2025 00:48:19 +0000 (0:00:00.469) 0:01:12.463 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ovn-db : Ensuring config directories exist] ******************************
Saturday 03 May 2025 00:48:20 +0000 (0:00:00.606) 0:01:13.070 **********
changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})

TASK [ovn-db : Copying over config.json files for services] ********************
Saturday 03 May 2025 00:48:21 +0000 (0:00:01.506) 0:01:14.577 **********
changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})

TASK [ovn-db : Check ovn containers] *******************************************
Saturday 03 May 2025 00:48:27 +0000 (0:00:05.483) 0:01:20.061 **********
changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})

TASK [ovn-db : Flush handlers] *************************************************
Saturday 03 May 2025 00:48:29 +0000 (0:00:02.437) 0:01:22.498 **********

TASK [ovn-db : Flush handlers] *************************************************
Saturday 03 May 2025 00:48:29 +0000 (0:00:00.058) 0:01:22.557 **********

TASK [ovn-db : Flush handlers] *************************************************
Saturday 03 May 2025 00:48:29 +0000 (0:00:00.055) 0:01:22.612 **********

RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
Saturday 03 May 2025 00:48:30 +0000 (0:00:00.197) 0:01:22.809 **********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
Saturday 03 May 2025 00:48:37 +0000 (0:00:07.633) 0:01:30.443 **********
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-0]

RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
Saturday 03 May 2025 00:48:44 +0000 (0:00:06.797) 0:01:37.240 **********
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]

TASK [ovn-db : Wait for leader election] ***************************************
Saturday 03 May 2025 00:48:51 +0000 (0:00:07.168)
0:01:44.408 ********** 2025-05-03 00:49:35.506062 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:49:35.506071 | orchestrator | 2025-05-03 00:49:35.506080 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-03 00:49:35.506089 | orchestrator | Saturday 03 May 2025 00:48:51 +0000 (0:00:00.134) 0:01:44.543 ********** 2025-05-03 00:49:35.506102 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:49:35.506111 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:49:35.506120 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:49:35.506128 | orchestrator | 2025-05-03 00:49:35.506142 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-03 00:49:35.506151 | orchestrator | Saturday 03 May 2025 00:48:53 +0000 (0:00:01.292) 0:01:45.836 ********** 2025-05-03 00:49:35.506160 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:49:35.506168 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:49:35.506177 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:49:35.506185 | orchestrator | 2025-05-03 00:49:35.506201 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-03 00:49:35.506210 | orchestrator | Saturday 03 May 2025 00:48:53 +0000 (0:00:00.635) 0:01:46.471 ********** 2025-05-03 00:49:35.506219 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:49:35.506228 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:49:35.506236 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:49:35.506245 | orchestrator | 2025-05-03 00:49:35.506254 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-03 00:49:35.506263 | orchestrator | Saturday 03 May 2025 00:48:54 +0000 (0:00:01.112) 0:01:47.584 ********** 2025-05-03 00:49:35.506271 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:49:35.506280 | orchestrator | skipping: [testbed-node-2] 2025-05-03 
00:49:35.506289 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:49:35.506297 | orchestrator | 2025-05-03 00:49:35.506306 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-03 00:49:35.506315 | orchestrator | Saturday 03 May 2025 00:48:55 +0000 (0:00:00.781) 0:01:48.365 ********** 2025-05-03 00:49:35.506323 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:49:35.506332 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:49:35.506341 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:49:35.506349 | orchestrator | 2025-05-03 00:49:35.506358 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-03 00:49:35.506367 | orchestrator | Saturday 03 May 2025 00:48:56 +0000 (0:00:01.322) 0:01:49.687 ********** 2025-05-03 00:49:35.506375 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:49:35.506384 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:49:35.506393 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:49:35.506401 | orchestrator | 2025-05-03 00:49:35.506410 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-05-03 00:49:35.506419 | orchestrator | Saturday 03 May 2025 00:48:57 +0000 (0:00:00.814) 0:01:50.502 ********** 2025-05-03 00:49:35.506428 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:49:35.506436 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:49:35.506445 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:49:35.506453 | orchestrator | 2025-05-03 00:49:35.506462 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-03 00:49:35.506471 | orchestrator | Saturday 03 May 2025 00:48:58 +0000 (0:00:00.488) 0:01:50.991 ********** 2025-05-03 00:49:35.506480 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.506489 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.506499 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.506512 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.506521 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.506530 | orchestrator | ok: 
[testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.506543 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.506552 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.506561 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.506624 | orchestrator | 2025-05-03 00:49:35.506639 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-03 00:49:35.506654 | orchestrator | Saturday 03 May 2025 
00:48:59 +0000 (0:00:01.594) 0:01:52.585 ********** 2025-05-03 00:49:35.506669 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.506679 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.506688 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.506706 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.506715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.506724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.506738 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.506748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.506756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-05-03 00:49:35.506765 | orchestrator | 2025-05-03 00:49:35.506774 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-03 00:49:35.506782 | orchestrator | Saturday 03 May 2025 00:49:03 +0000 (0:00:04.002) 0:01:56.588 ********** 2025-05-03 00:49:35.506791 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.506800 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.506809 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.506821 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 
00:49:35.506833 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.506842 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.506854 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.506868 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.506878 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 00:49:35.506886 | orchestrator | 2025-05-03 00:49:35.506895 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-03 00:49:35.506904 | orchestrator | Saturday 03 May 2025 00:49:07 +0000 (0:00:03.219) 0:01:59.808 ********** 2025-05-03 00:49:35.506912 | orchestrator | 2025-05-03 00:49:35.506921 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-03 00:49:35.506930 | orchestrator | Saturday 03 May 2025 00:49:07 +0000 (0:00:00.346) 0:02:00.155 ********** 2025-05-03 00:49:35.506938 | orchestrator | 2025-05-03 00:49:35.506947 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-03 00:49:35.506956 | orchestrator | Saturday 03 May 2025 00:49:07 +0000 (0:00:00.060) 0:02:00.216 ********** 2025-05-03 00:49:35.506964 | orchestrator | 2025-05-03 00:49:35.506973 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-03 00:49:35.506981 | orchestrator | Saturday 03 May 2025 00:49:07 +0000 (0:00:00.059) 0:02:00.275 ********** 2025-05-03 00:49:35.506990 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:49:35.507003 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:49:35.507012 | orchestrator | 2025-05-03 00:49:35.507020 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-03 00:49:35.507029 | orchestrator | Saturday 03 May 2025 00:49:14 +0000 (0:00:06.687) 0:02:06.963 ********** 2025-05-03 00:49:35.507037 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:49:35.507045 | orchestrator | changed: [testbed-node-2] 2025-05-03 
00:49:35.507053 | orchestrator | 2025-05-03 00:49:35.507061 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-03 00:49:35.507069 | orchestrator | Saturday 03 May 2025 00:49:20 +0000 (0:00:06.221) 0:02:13.185 ********** 2025-05-03 00:49:35.507076 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:49:35.507084 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:49:35.507092 | orchestrator | 2025-05-03 00:49:35.507100 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-03 00:49:35.507108 | orchestrator | Saturday 03 May 2025 00:49:26 +0000 (0:00:06.285) 0:02:19.470 ********** 2025-05-03 00:49:35.507116 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:49:35.507124 | orchestrator | 2025-05-03 00:49:35.507132 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-03 00:49:35.507140 | orchestrator | Saturday 03 May 2025 00:49:27 +0000 (0:00:00.313) 0:02:19.784 ********** 2025-05-03 00:49:35.507148 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:49:35.507155 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:49:35.507163 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:49:35.507171 | orchestrator | 2025-05-03 00:49:35.507179 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-03 00:49:35.507187 | orchestrator | Saturday 03 May 2025 00:49:27 +0000 (0:00:00.815) 0:02:20.599 ********** 2025-05-03 00:49:35.507195 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:49:35.507203 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:49:35.507211 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:49:35.507219 | orchestrator | 2025-05-03 00:49:35.507227 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-03 00:49:35.507235 | orchestrator | Saturday 03 May 2025 00:49:28 +0000 
(0:00:00.623) 0:02:21.223 ********** 2025-05-03 00:49:35.507243 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:49:35.507257 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:49:35.507266 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:49:35.507274 | orchestrator | 2025-05-03 00:49:35.507282 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-03 00:49:35.507290 | orchestrator | Saturday 03 May 2025 00:49:29 +0000 (0:00:00.966) 0:02:22.189 ********** 2025-05-03 00:49:35.507298 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:49:35.507305 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:49:35.507313 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:49:35.507321 | orchestrator | 2025-05-03 00:49:35.507329 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-03 00:49:35.507337 | orchestrator | Saturday 03 May 2025 00:49:30 +0000 (0:00:00.755) 0:02:22.944 ********** 2025-05-03 00:49:35.507345 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:49:35.507353 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:49:35.507361 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:49:35.507369 | orchestrator | 2025-05-03 00:49:35.507377 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-03 00:49:35.507385 | orchestrator | Saturday 03 May 2025 00:49:30 +0000 (0:00:00.700) 0:02:23.645 ********** 2025-05-03 00:49:35.507393 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:49:35.507401 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:49:35.507409 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:49:35.507417 | orchestrator | 2025-05-03 00:49:35.507425 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-03 00:49:35.507433 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 
2025-05-03 00:49:35.507446 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-03 00:49:35.507459 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-03 00:49:38.522225 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-03 00:49:38.522345 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-03 00:49:38.522365 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-03 00:49:38.522380 | orchestrator | 2025-05-03 00:49:38.522395 | orchestrator | 2025-05-03 00:49:38.522410 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-03 00:49:38.522425 | orchestrator | Saturday 03 May 2025 00:49:32 +0000 (0:00:01.322) 0:02:24.967 ********** 2025-05-03 00:49:38.522439 | orchestrator | =============================================================================== 2025-05-03 00:49:38.522453 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 23.12s 2025-05-03 00:49:38.522467 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.16s 2025-05-03 00:49:38.522480 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.32s 2025-05-03 00:49:38.522494 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.45s 2025-05-03 00:49:38.522508 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.02s 2025-05-03 00:49:38.522523 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.48s 2025-05-03 00:49:38.522544 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.00s 2025-05-03 
00:49:38.522559 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.22s 2025-05-03 00:49:38.522623 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.80s 2025-05-03 00:49:38.522638 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.44s 2025-05-03 00:49:38.522652 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 2.11s 2025-05-03 00:49:38.522666 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.05s 2025-05-03 00:49:38.522680 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.70s 2025-05-03 00:49:38.522695 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.66s 2025-05-03 00:49:38.522709 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.62s 2025-05-03 00:49:38.522723 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.59s 2025-05-03 00:49:38.522736 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.51s 2025-05-03 00:49:38.522750 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.47s 2025-05-03 00:49:38.522765 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.39s 2025-05-03 00:49:38.522781 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.32s 2025-05-03 00:49:38.522797 | orchestrator | 2025-05-03 00:49:35 | INFO  | Task ea02c298-66ad-47bd-ac8a-8cb3fb2b5ef8 is in state SUCCESS 2025-05-03 00:49:38.522813 | orchestrator | 2025-05-03 00:49:35 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:49:38.522829 | orchestrator | 2025-05-03 00:49:35 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in 
state STARTED
2025-05-03 00:49:38.522845 | orchestrator | 2025-05-03 00:49:35 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:49:38.522884 | orchestrator | 2025-05-03 00:49:35 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:49:38.522916 | orchestrator | 2025-05-03 00:49:38 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:49:41.560242 | orchestrator | 2025-05-03 00:49:38 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED
2025-05-03 00:49:41.560358 | orchestrator | 2025-05-03 00:49:38 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:49:41.560379 | orchestrator | 2025-05-03 00:49:38 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:52:38.536601 | orchestrator | 2025-05-03 00:52:38 | INFO  | Task c6827bf1-414e-4daa-87d4-9996e66de25f is in state STARTED
2025-05-03 00:52:50.770604 | orchestrator | 2025-05-03 00:52:50 | INFO  | Task c6827bf1-414e-4daa-87d4-9996e66de25f is in state SUCCESS
2025-05-03 00:53:02.993823 | orchestrator | 2025-05-03 00:53:02 | INFO  | Task
c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:53:02.995904 | orchestrator | 2025-05-03 00:53:02 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state STARTED
2025-05-03 00:53:02.996305 | orchestrator | 2025-05-03 00:53:02 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:53:06.055694 | orchestrator | 2025-05-03 00:53:02 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:53:06.055798 | orchestrator | 2025-05-03 00:53:06 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED
2025-05-03 00:53:06.056639 | orchestrator | 2025-05-03 00:53:06 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:53:06.066697 | orchestrator | 2025-05-03 00:53:06 | INFO  | Task 50ea52a1-6660-489a-988a-e6c4a3d15730 is in state SUCCESS
2025-05-03 00:53:06.071371 | orchestrator |
2025-05-03 00:53:06.071588 | orchestrator | None
2025-05-03 00:53:06.071615 | orchestrator |
2025-05-03 00:53:06.071631 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-03 00:53:06.071648 | orchestrator |
2025-05-03 00:53:06.071663 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-03 00:53:06.071681 | orchestrator | Saturday 03 May 2025 00:45:49 +0000 (0:00:00.698) 0:00:00.698 **********
2025-05-03 00:53:06.071715 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:53:06.071727 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:53:06.071737 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:53:06.071747 | orchestrator |
2025-05-03 00:53:06.071756 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-03 00:53:06.071783 | orchestrator | Saturday 03 May 2025 00:45:49 +0000 (0:00:00.431) 0:00:01.129 **********
2025-05-03 00:53:06.071794 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-05-03 00:53:06.071804 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-05-03 00:53:06.071814 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-05-03 00:53:06.071823 | orchestrator |
2025-05-03 00:53:06.071833 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-05-03 00:53:06.071842 | orchestrator |
2025-05-03 00:53:06.071851 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-05-03 00:53:06.071861 | orchestrator | Saturday 03 May 2025 00:45:50 +0000 (0:00:00.320) 0:00:01.450 **********
2025-05-03 00:53:06.071871 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:53:06.071880 | orchestrator |
2025-05-03 00:53:06.071890 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-05-03 00:53:06.071899 | orchestrator | Saturday 03 May 2025 00:45:51 +0000 (0:00:00.720) 0:00:02.170 **********
2025-05-03 00:53:06.071908 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:53:06.071918 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:53:06.071927 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:53:06.071939 | orchestrator |
2025-05-03 00:53:06.071950 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-05-03 00:53:06.071962 | orchestrator | Saturday 03 May 2025 00:45:52 +0000 (0:00:01.062) 0:00:03.233 **********
2025-05-03 00:53:06.071973 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:53:06.071984 | orchestrator |
2025-05-03 00:53:06.071995 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-05-03 00:53:06.072044 | orchestrator | Saturday 03 May 2025 00:45:52 +0000 (0:00:00.588) 0:00:03.821 **********
2025-05-03 00:53:06.072057 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:53:06.072067 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:53:06.072078 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:53:06.072089 | orchestrator |
2025-05-03 00:53:06.072100 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-05-03 00:53:06.072111 | orchestrator | Saturday 03 May 2025 00:45:53 +0000 (0:00:00.993) 0:00:04.815 **********
2025-05-03 00:53:06.072150 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-03 00:53:06.072161 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-03 00:53:06.072173 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-03 00:53:06.072185 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-03 00:53:06.072196 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-03 00:53:06.072206 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-03 00:53:06.072216 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-03 00:53:06.072260 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-03 00:53:06.072272 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-05-03 00:53:06.072284 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-03 00:53:06.072295 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-05-03 00:53:06.072304 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-03 00:53:06.072314 | orchestrator | 2025-05-03 00:53:06.072323 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-03 00:53:06.072338 | orchestrator | Saturday 03 May 2025 00:45:56 +0000 (0:00:02.840) 0:00:07.656 ********** 2025-05-03 00:53:06.072348 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-03 00:53:06.072364 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-03 00:53:06.072374 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-03 00:53:06.072384 | orchestrator | 2025-05-03 00:53:06.072393 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-03 00:53:06.072560 | orchestrator | Saturday 03 May 2025 00:45:57 +0000 (0:00:00.830) 0:00:08.486 ********** 2025-05-03 00:53:06.072592 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-03 00:53:06.072602 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-03 00:53:06.072612 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-03 00:53:06.072621 | orchestrator | 2025-05-03 00:53:06.072631 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-03 00:53:06.072652 | orchestrator | Saturday 03 May 2025 00:45:59 +0000 (0:00:01.724) 0:00:10.210 ********** 2025-05-03 00:53:06.072662 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-05-03 00:53:06.072671 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.072693 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-05-03 00:53:06.072704 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.072716 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-05-03 00:53:06.072727 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.072738 | orchestrator | 2025-05-03 00:53:06.072749 | orchestrator | TASK [loadbalancer 
: Ensuring config directories exist] ************************ 2025-05-03 00:53:06.072761 | orchestrator | Saturday 03 May 2025 00:45:59 +0000 (0:00:00.552) 0:00:10.763 ********** 2025-05-03 00:53:06.072774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-03 00:53:06.072792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-03 00:53:06.072804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-03 00:53:06.072817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-03 00:53:06.072839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-03 00:53:06.072857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-03 00:53:06.072869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-03 00:53:06.072881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-03 00:53:06.072894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-03 00:53:06.072906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-03 00:53:06.072924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-03 00:53:06.072935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19', 
'__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-03 00:53:06.072947 | orchestrator | 2025-05-03 00:53:06.072958 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-05-03 00:53:06.072970 | orchestrator | Saturday 03 May 2025 00:46:01 +0000 (0:00:02.126) 0:00:12.889 ********** 2025-05-03 00:53:06.073083 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:53:06.073096 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:53:06.073129 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:53:06.073141 | orchestrator | 2025-05-03 00:53:06.073158 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-05-03 00:53:06.073170 | orchestrator | Saturday 03 May 2025 00:46:04 +0000 (0:00:02.349) 0:00:15.239 ********** 2025-05-03 00:53:06.073181 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-05-03 00:53:06.073193 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-05-03 00:53:06.073205 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-05-03 00:53:06.073216 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-05-03 00:53:06.073227 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-05-03 00:53:06.073239 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-05-03 00:53:06.073250 | orchestrator | 2025-05-03 00:53:06.073261 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-05-03 00:53:06.073272 | orchestrator | Saturday 03 May 2025 00:46:07 +0000 (0:00:03.818) 0:00:19.058 ********** 2025-05-03 00:53:06.073284 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:53:06.073295 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:53:06.073306 | orchestrator | 
changed: [testbed-node-2] 2025-05-03 00:53:06.073317 | orchestrator | 2025-05-03 00:53:06.073328 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-05-03 00:53:06.073340 | orchestrator | Saturday 03 May 2025 00:46:10 +0000 (0:00:02.859) 0:00:21.917 ********** 2025-05-03 00:53:06.073351 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:53:06.073362 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:53:06.073373 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:53:06.073385 | orchestrator | 2025-05-03 00:53:06.073396 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-05-03 00:53:06.073442 | orchestrator | Saturday 03 May 2025 00:46:13 +0000 (0:00:02.287) 0:00:24.205 ********** 2025-05-03 00:53:06.073464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-03 00:53:06.073484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-03 00:53:06.073523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-03 00:53:06.073536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-03 00:53:06.073557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-03 00:53:06.073570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-03 00:53:06.073582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-03 00:53:06.073600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-03 00:53:06.073612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-03 00:53:06.073624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-03 00:53:06.073644 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.073657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19', 
'__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-03 00:53:06.073668 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.073686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-03 00:53:06.073698 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.073709 | orchestrator | 2025-05-03 00:53:06.073721 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-05-03 00:53:06.073732 | orchestrator | Saturday 03 May 2025 00:46:15 +0000 (0:00:02.057) 0:00:26.263 ********** 2025-05-03 00:53:06.073819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-03 00:53:06.073832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-03 00:53:06.073844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-03 00:53:06.073856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-03 00:53:06.073873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-03 00:53:06.073885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-03 00:53:06.073902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-03 00:53:06.073914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-03 00:53:06.073926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-03 00:53:06.073938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-03 00:53:06.073949 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-03 00:53:06.073968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-03 00:53:06.076717 | orchestrator | 2025-05-03 00:53:06.076871 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-05-03 00:53:06.076885 | orchestrator | Saturday 03 May 2025 00:46:19 +0000 (0:00:04.218) 0:00:30.482 ********** 2025-05-03 00:53:06.076908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-03 00:53:06.076920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-03 00:53:06.076931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-03 00:53:06.076942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-03 00:53:06.076953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-03 00:53:06.076974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-03 00:53:06.076990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-03 00:53:06.077001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-03 00:53:06.077012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-03 00:53:06.077023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-03 00:53:06.077034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-03 00:53:06.077045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-03 00:53:06.077056 | orchestrator | 2025-05-03 00:53:06.077066 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-05-03 00:53:06.077076 | orchestrator | Saturday 03 May 2025 
00:46:22 +0000 (0:00:03.552) 0:00:34.034 ********** 2025-05-03 00:53:06.077097 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-03 00:53:06.077109 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-03 00:53:06.077119 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-03 00:53:06.077129 | orchestrator | 2025-05-03 00:53:06.077139 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-05-03 00:53:06.077149 | orchestrator | Saturday 03 May 2025 00:46:24 +0000 (0:00:02.124) 0:00:36.159 ********** 2025-05-03 00:53:06.077159 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-03 00:53:06.077169 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-03 00:53:06.077180 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-03 00:53:06.077190 | orchestrator | 2025-05-03 00:53:06.077213 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-05-03 00:53:06.077225 | orchestrator | Saturday 03 May 2025 00:46:29 +0000 (0:00:04.333) 0:00:40.492 ********** 2025-05-03 00:53:06.077236 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.077247 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.077258 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.077302 | orchestrator | 2025-05-03 00:53:06.077314 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-05-03 00:53:06.077385 | orchestrator | Saturday 03 May 2025 00:46:30 +0000 (0:00:01.153) 0:00:41.646 ********** 
2025-05-03 00:53:06.077398 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-03 00:53:06.077473 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-03 00:53:06.077486 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-03 00:53:06.077498 | orchestrator | 2025-05-03 00:53:06.077510 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-05-03 00:53:06.077523 | orchestrator | Saturday 03 May 2025 00:46:34 +0000 (0:00:04.100) 0:00:45.747 ********** 2025-05-03 00:53:06.077535 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-03 00:53:06.077547 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-03 00:53:06.077560 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-03 00:53:06.077572 | orchestrator | 2025-05-03 00:53:06.077584 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-05-03 00:53:06.077596 | orchestrator | Saturday 03 May 2025 00:46:36 +0000 (0:00:02.278) 0:00:48.026 ********** 2025-05-03 00:53:06.077608 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-05-03 00:53:06.077629 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-05-03 00:53:06.077642 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-05-03 00:53:06.077661 | orchestrator | 2025-05-03 00:53:06.077672 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-05-03 00:53:06.077682 | orchestrator | 
Saturday 03 May 2025 00:46:39 +0000 (0:00:02.417) 0:00:50.443 ********** 2025-05-03 00:53:06.077693 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-05-03 00:53:06.077722 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-05-03 00:53:06.077733 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-05-03 00:53:06.077765 | orchestrator | 2025-05-03 00:53:06.077777 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-03 00:53:06.077793 | orchestrator | Saturday 03 May 2025 00:46:41 +0000 (0:00:02.436) 0:00:52.879 ********** 2025-05-03 00:53:06.077804 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:53:06.077814 | orchestrator | 2025-05-03 00:53:06.077824 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-05-03 00:53:06.077834 | orchestrator | Saturday 03 May 2025 00:46:42 +0000 (0:00:00.585) 0:00:53.465 ********** 2025-05-03 00:53:06.077845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-03 00:53:06.077869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-03 00:53:06.077881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-03 00:53:06.077892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-03 00:53:06.077903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-03 00:53:06.077913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-03 00:53:06.077928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-03 00:53:06.077951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-03 00:53:06.077963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-03 00:53:06.077973 | orchestrator | 2025-05-03 00:53:06.077983 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-05-03 00:53:06.077994 | orchestrator | Saturday 03 May 2025 00:46:45 +0000 (0:00:03.410) 0:00:56.875 ********** 2025-05-03 00:53:06.078004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-03 00:53:06.078053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-03 00:53:06.078067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-03 00:53:06.078083 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.078094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-03 00:53:06.078108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-03 00:53:06.078125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-03 00:53:06.078136 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.078180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-03 00:53:06.078192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-03 00:53:06.078268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-03 00:53:06.078312 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.078323 | orchestrator | 2025-05-03 00:53:06.078334 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-05-03 00:53:06.078344 | orchestrator | Saturday 03 May 2025 00:46:46 +0000 (0:00:00.546) 0:00:57.422 ********** 2025-05-03 00:53:06.078355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-03 00:53:06.078370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-03 00:53:06.078386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-03 00:53:06.078397 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.078478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-03 00:53:06.078492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-03 00:53:06.078503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-03 00:53:06.078520 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.078531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-03 00:53:06.078542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-03 00:53:06.078556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-03 00:53:06.078568 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.078578 | orchestrator | 2025-05-03 00:53:06.078589 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-05-03 00:53:06.078605 | orchestrator | Saturday 03 May 2025 00:46:47 +0000 (0:00:00.945) 0:00:58.367 ********** 2025-05-03 00:53:06.078616 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-03 
00:53:06.078627 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-05-03 00:53:06.078637 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-05-03 00:53:06.078648 | orchestrator |
2025-05-03 00:53:06.078658 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2025-05-03 00:53:06.078667 | orchestrator | Saturday 03 May 2025 00:46:49 +0000 (0:00:02.052) 0:01:00.420 **********
2025-05-03 00:53:06.078676 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-05-03 00:53:06.078685 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-05-03 00:53:06.078696 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-05-03 00:53:06.078705 | orchestrator |
2025-05-03 00:53:06.078715 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2025-05-03 00:53:06.078723 | orchestrator | Saturday 03 May 2025 00:46:51 +0000 (0:00:02.606) 0:01:03.027 **********
2025-05-03 00:53:06.078732 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-05-03 00:53:06.078787 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-05-03 00:53:06.078797 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-05-03 00:53:06.078806 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-03 00:53:06.078814 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.078823 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-03 00:53:06.078832 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.078840 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-03 00:53:06.078875 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.078885 | orchestrator |
2025-05-03 00:53:06.078894 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2025-05-03 00:53:06.078903 | orchestrator | Saturday 03 May 2025 00:46:53 +0000 (0:00:01.705) 0:01:04.733 **********
2025-05-03 00:53:06.078912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-03 00:53:06.078922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-03 00:53:06.078931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-03 00:53:06.078947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-03 00:53:06.078956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-03 00:53:06.078974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-03 00:53:06.078984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-03 00:53:06.078993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-03 00:53:06.079002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-03 00:53:06.079015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-03 00:53:06.079024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-03 00:53:06.079048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19', '__omit_place_holder__f9f602590c262c6eedecea2ca3646a22824f2b19'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-03 00:53:06.079057 | orchestrator |
2025-05-03 00:53:06.079066 | orchestrator | TASK [include_role : aodh] *****************************************************
2025-05-03 00:53:06.079075 | orchestrator | Saturday 03 May 2025 00:46:57 +0000 (0:00:04.045) 0:01:08.778 **********
2025-05-03 00:53:06.079084 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:53:06.079093 | orchestrator |
2025-05-03 00:53:06.079102 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2025-05-03 00:53:06.079110 | orchestrator | Saturday 03 May 2025 00:46:58 +0000 (0:00:01.010) 0:01:09.789 **********
2025-05-03 00:53:06.079119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-03 00:53:06.079129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-03 00:53:06.079138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.079156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.079171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-03 00:53:06.079180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-03 00:53:06.079190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.079199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.079208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-03 00:53:06.079223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-03 00:53:06.079236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.079246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.079254 | orchestrator |
2025-05-03 00:53:06.079289 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2025-05-03 00:53:06.079299 | orchestrator | Saturday 03 May 2025 00:47:02 +0000 (0:00:04.178) 0:01:13.967 **********
2025-05-03 00:53:06.079308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-03 00:53:06.079317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-03 00:53:06.079326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.079454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.079494 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.079505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-03 00:53:06.079514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-03 00:53:06.079523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.079532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.079541 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.079560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-03 00:53:06.079581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-03 00:53:06.079592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.079602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.079612 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.079629 | orchestrator |
2025-05-03 00:53:06.079639 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2025-05-03 00:53:06.079648 | orchestrator | Saturday 03 May 2025 00:47:03 +0000 (0:00:00.788) 0:01:14.755 **********
2025-05-03 00:53:06.079658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-05-03 00:53:06.079668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-05-03 00:53:06.079677 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.079686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-05-03 00:53:06.079695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-05-03 00:53:06.079704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-05-03 00:53:06.079713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-05-03 00:53:06.079722 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.079731 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.079740 | orchestrator |
2025-05-03 00:53:06.079748 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2025-05-03 00:53:06.079757 | orchestrator | Saturday 03 May 2025 00:47:04 +0000 (0:00:01.214) 0:01:15.970 **********
2025-05-03 00:53:06.079772 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:53:06.079781 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:53:06.079789 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:53:06.079798 | orchestrator |
2025-05-03 00:53:06.079806 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2025-05-03 00:53:06.079815 | orchestrator | Saturday 03 May 2025 00:47:06 +0000 (0:00:01.313) 0:01:17.284 **********
2025-05-03 00:53:06.079824 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:53:06.079833 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:53:06.079841 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:53:06.079850 | orchestrator |
2025-05-03 00:53:06.079859 | orchestrator | TASK [include_role : barbican] *************************************************
2025-05-03 00:53:06.079902 | orchestrator | Saturday 03 May 2025 00:47:08 +0000 (0:00:02.385) 0:01:19.669 **********
2025-05-03 00:53:06.079912 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:53:06.079920 | orchestrator |
2025-05-03 00:53:06.079960 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2025-05-03 00:53:06.079970 | orchestrator | Saturday 03 May 2025 00:47:09 +0000 (0:00:00.828) 0:01:20.497 **********
2025-05-03 00:53:06.079985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-03 00:53:06.080006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.080016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.080026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-03 00:53:06.080042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-03 00:53:06.080056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.080066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.080081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.080091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.080105 | orchestrator |
2025-05-03 00:53:06.080113 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2025-05-03 00:53:06.080122 | orchestrator | Saturday 03 May 2025 00:47:14 +0000 (0:00:04.967) 0:01:25.465 **********
2025-05-03 00:53:06.080131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206',
'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-03 00:53:06.080147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-03 00:53:06.080163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 
'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.080172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.080181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.080199 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.080208 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.080217 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.080232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-03 00:53:06.080377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.080389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.080399 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.080422 | orchestrator | 2025-05-03 00:53:06.080431 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-05-03 00:53:06.080441 | orchestrator | Saturday 03 May 2025 00:47:15 +0000 (0:00:00.967) 0:01:26.432 ********** 2025-05-03 00:53:06.080456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-03 00:53:06.080465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  
2025-05-03 00:53:06.080474 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.080483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-03 00:53:06.080497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-03 00:53:06.080506 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.080515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-03 00:53:06.080524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-03 00:53:06.080533 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.080577 | orchestrator | 2025-05-03 00:53:06.080587 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-05-03 00:53:06.080596 | orchestrator | Saturday 03 May 2025 00:47:16 +0000 (0:00:00.852) 0:01:27.285 ********** 2025-05-03 00:53:06.080605 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:53:06.080613 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:53:06.080622 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:53:06.080630 | orchestrator | 2025-05-03 00:53:06.080639 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-05-03 00:53:06.080648 | orchestrator | Saturday 03 May 2025 00:47:17 +0000 (0:00:01.487) 
0:01:28.773 ********** 2025-05-03 00:53:06.080657 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:53:06.080665 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:53:06.080674 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:53:06.080682 | orchestrator | 2025-05-03 00:53:06.080691 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-05-03 00:53:06.080700 | orchestrator | Saturday 03 May 2025 00:47:19 +0000 (0:00:02.064) 0:01:30.838 ********** 2025-05-03 00:53:06.080708 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.080717 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.080725 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.080734 | orchestrator | 2025-05-03 00:53:06.080749 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-05-03 00:53:06.080758 | orchestrator | Saturday 03 May 2025 00:47:19 +0000 (0:00:00.311) 0:01:31.149 ********** 2025-05-03 00:53:06.080766 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:53:06.080775 | orchestrator | 2025-05-03 00:53:06.080783 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-05-03 00:53:06.080792 | orchestrator | Saturday 03 May 2025 00:47:20 +0000 (0:00:00.799) 0:01:31.948 ********** 2025-05-03 00:53:06.080801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-03 00:53:06.080816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-03 00:53:06.080826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check 
inter 2000 rise 2 fall 5']}}}}) 2025-05-03 00:53:06.080835 | orchestrator | 2025-05-03 00:53:06.080843 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-05-03 00:53:06.080852 | orchestrator | Saturday 03 May 2025 00:47:23 +0000 (0:00:02.819) 0:01:34.768 ********** 2025-05-03 00:53:06.080869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-03 00:53:06.080915 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.080932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-03 00:53:06.080946 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.080956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-03 00:53:06.080965 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.080973 | orchestrator | 2025-05-03 00:53:06.080982 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-05-03 00:53:06.080991 | orchestrator | Saturday 03 May 2025 00:47:25 +0000 (0:00:01.574) 0:01:36.342 ********** 2025-05-03 00:53:06.081001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-03 00:53:06.081011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-03 00:53:06.081042 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.081052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-03 00:53:06.081061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-03 00:53:06.081070 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.081079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-03 00:53:06.081097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-03 00:53:06.081140 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.081151 | orchestrator | 2025-05-03 00:53:06.081160 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-05-03 00:53:06.081169 | orchestrator | Saturday 03 May 2025 00:47:27 +0000 (0:00:01.855) 0:01:38.198 ********** 2025-05-03 00:53:06.081177 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.081186 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.081194 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.081203 | orchestrator | 2025-05-03 00:53:06.081212 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-05-03 00:53:06.081220 | orchestrator | Saturday 03 May 2025 00:47:27 +0000 (0:00:00.665) 0:01:38.864 ********** 2025-05-03 00:53:06.081229 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.081238 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.081246 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.081255 | orchestrator | 2025-05-03 00:53:06.081264 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-05-03 00:53:06.081272 | orchestrator | Saturday 03 May 2025 00:47:28 +0000 (0:00:01.150) 0:01:40.014 ********** 2025-05-03 00:53:06.081281 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:53:06.081290 | orchestrator | 2025-05-03 00:53:06.081298 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-05-03 00:53:06.081307 | 
orchestrator | Saturday 03 May 2025 00:47:29 +0000 (0:00:00.746) 0:01:40.761 ********** 2025-05-03 00:53:06.081316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-03 00:53:06.081334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.081344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.081382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.081393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-03 00:53:06.081402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.081506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.081525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.081541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-03 00:53:06.081556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.081565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.081581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.081590 | orchestrator | 2025-05-03 00:53:06.081599 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-05-03 00:53:06.081611 | 
orchestrator | Saturday 03 May 2025 00:47:33 +0000 (0:00:03.620) 0:01:44.381 ********** 2025-05-03 00:53:06.081620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-03 00:53:06.081637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.081675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.081693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.081702 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.081712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-03 00:53:06.081720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.081745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.081764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.081773 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.081781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-03 00:53:06.081790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.081798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.081816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.081825 | orchestrator | skipping: 
[testbed-node-2]
2025-05-03 00:53:06.081834 | orchestrator |
2025-05-03 00:53:06.081842 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-05-03 00:53:06.081851 | orchestrator | Saturday 03 May 2025 00:47:34 +0000 (0:00:01.039) 0:01:45.420 **********
2025-05-03 00:53:06.081859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-03 00:53:06.081872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-03 00:53:06.081882 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.081890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-03 00:53:06.081898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-03 00:53:06.081907 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.081915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-03 00:53:06.081923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-03 00:53:06.081931 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.081939 | orchestrator |
2025-05-03 00:53:06.081947 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-05-03 00:53:06.081955 | orchestrator | Saturday 03 May 2025 00:47:35 +0000 (0:00:01.259) 0:01:46.680 **********
2025-05-03 00:53:06.081963 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:53:06.081971 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:53:06.081979 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:53:06.081987 | orchestrator |
2025-05-03 00:53:06.081995 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-05-03 00:53:06.082043 | orchestrator | Saturday 03 May 2025 00:47:37 +0000 (0:00:01.522) 0:01:48.202 **********
2025-05-03 00:53:06.082052 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:53:06.082069 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:53:06.082078 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:53:06.082086 | orchestrator |
2025-05-03 00:53:06.082095 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-05-03 00:53:06.082103 | orchestrator | Saturday 03 May 2025 00:47:39 +0000 (0:00:02.213) 0:01:50.416 **********
2025-05-03 00:53:06.082112 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.082146 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.082161 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.082173 | orchestrator |
2025-05-03 00:53:06.082181 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-05-03 00:53:06.082189 | orchestrator | Saturday 03 May 2025 00:47:39 +0000 (0:00:00.309) 0:01:50.725 **********
2025-05-03 00:53:06.082197 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.082205 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.082213 |
orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.082221 | orchestrator | 2025-05-03 00:53:06.082230 | orchestrator | TASK [include_role : designate] ************************************************ 2025-05-03 00:53:06.082238 | orchestrator | Saturday 03 May 2025 00:47:40 +0000 (0:00:00.478) 0:01:51.204 ********** 2025-05-03 00:53:06.082246 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:53:06.082254 | orchestrator | 2025-05-03 00:53:06.082262 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-05-03 00:53:06.082270 | orchestrator | Saturday 03 May 2025 00:47:41 +0000 (0:00:01.122) 0:01:52.327 ********** 2025-05-03 00:53:06.082279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-03 00:53:06.082294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-03 00:53:06.082303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.082312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.082321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.082334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.082349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.082358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-03 00:53:06.082371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-03 00:53:06.082380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.082388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.082401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.082432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.082441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 
'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.082455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-03 00:53:06.082464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-03 00:53:06.082476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.082490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.082499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.082507 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.082516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.082524 | orchestrator |
2025-05-03 00:53:06.082537 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2025-05-03 00:53:06.082546 | orchestrator | Saturday 03 May 2025 00:47:47 +0000 (0:00:06.286) 0:01:58.614 **********
2025-05-03 00:53:06.082554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-03 00:53:06.082575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-03 00:53:06.082584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.082593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.082601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.082613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.082622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.082634 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.082648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-03 00:53:06.082657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-03 00:53:06.082666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.082675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.082683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.082697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.082710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-03 00:53:06.082724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.082733 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.082742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-03 00:53:06.082750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.082759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.082772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.082784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.082798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.082807 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.082816 | orchestrator |
2025-05-03 00:53:06.082824 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2025-05-03 00:53:06.082832 | orchestrator | Saturday 03 May 2025 00:47:48 +0000 (0:00:01.284) 0:01:59.898 **********
2025-05-03 00:53:06.082840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-05-03 00:53:06.082849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-05-03 00:53:06.082858 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.082866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-05-03 00:53:06.082874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-05-03 00:53:06.082882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-05-03 00:53:06.082890 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.082898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-05-03 00:53:06.082906 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.082914 | orchestrator |
2025-05-03 00:53:06.082922 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2025-05-03 00:53:06.082930 | orchestrator | Saturday 03 May 2025 00:47:50 +0000 (0:00:01.898) 0:02:01.797 **********
2025-05-03 00:53:06.082938 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:53:06.082946 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:53:06.082954 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:53:06.082962 | orchestrator |
2025-05-03 00:53:06.082970 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2025-05-03 00:53:06.082978 | orchestrator | Saturday 03 May 2025 00:47:51 +0000 (0:00:01.353) 0:02:03.150 **********
2025-05-03 00:53:06.082986 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:53:06.082998 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:53:06.083006 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:53:06.083014 | orchestrator |
2025-05-03 00:53:06.083023 | orchestrator | TASK [include_role : etcd] *****************************************************
2025-05-03 00:53:06.083031 | orchestrator | Saturday 03 May 2025 00:47:54 +0000 (0:00:02.124) 0:02:05.275 **********
2025-05-03 00:53:06.083039 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.083047 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.083055 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.083063 | orchestrator |
2025-05-03 00:53:06.083071 | orchestrator | TASK [include_role : glance] ***************************************************
2025-05-03 00:53:06.083083 | orchestrator | Saturday 03 May 2025 00:47:54 +0000 (0:00:00.484) 0:02:05.760 **********
2025-05-03 00:53:06.083091 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:53:06.083099 | orchestrator |
2025-05-03 00:53:06.083107 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2025-05-03 00:53:06.083115 | orchestrator | Saturday 03 May 2025 00:47:55 +0000 (0:00:01.147) 0:02:06.907 **********
2025-05-03 00:53:06.083124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-03 00:53:06.083140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-05-03 00:53:06.083159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-03 00:53:06.083175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-03 00:53:06.083198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-05-03 00:53:06.083208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-05-03 00:53:06.083222 | orchestrator |
2025-05-03 00:53:06.083235 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2025-05-03 00:53:06.083243 | orchestrator | Saturday 03 May 2025 00:48:00 +0000 (0:00:05.077) 0:02:11.985 **********
2025-05-03 00:53:06.083257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-03 00:53:06.083265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-05-03 00:53:06.083274 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.083297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-03 00:53:06.083311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-03 00:53:06.083325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-05-03 00:53:06.083338 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.083357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-03 00:53:06.083367 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.083375 | orchestrator | 2025-05-03 00:53:06.083383 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-05-03 00:53:06.083394 | orchestrator | Saturday 03 May 2025 00:48:03 +0000 (0:00:02.927) 0:02:14.913 ********** 2025-05-03 00:53:06.083403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-03 00:53:06.083427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-03 00:53:06.083440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-03 00:53:06.083449 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.083463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-03 00:53:06.083471 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.083480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-03 00:53:06.083489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-03 00:53:06.083497 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.083505 | orchestrator | 2025-05-03 00:53:06.083513 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-05-03 00:53:06.083521 | orchestrator | Saturday 03 May 2025 00:48:08 +0000 (0:00:04.405) 0:02:19.318 ********** 2025-05-03 00:53:06.083529 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:53:06.083537 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:53:06.083545 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:53:06.083553 | orchestrator | 2025-05-03 00:53:06.083561 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-05-03 00:53:06.083569 | orchestrator | Saturday 03 May 2025 00:48:09 +0000 (0:00:01.236) 0:02:20.555 ********** 2025-05-03 00:53:06.083577 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:53:06.083585 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:53:06.083593 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:53:06.083601 | orchestrator | 2025-05-03 00:53:06.083609 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-05-03 
00:53:06.083617 | orchestrator | Saturday 03 May 2025 00:48:11 +0000 (0:00:01.797) 0:02:22.353 ********** 2025-05-03 00:53:06.083629 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.083637 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.083645 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.083653 | orchestrator | 2025-05-03 00:53:06.083661 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-05-03 00:53:06.083669 | orchestrator | Saturday 03 May 2025 00:48:11 +0000 (0:00:00.369) 0:02:22.722 ********** 2025-05-03 00:53:06.083677 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:53:06.083685 | orchestrator | 2025-05-03 00:53:06.083693 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-05-03 00:53:06.083701 | orchestrator | Saturday 03 May 2025 00:48:12 +0000 (0:00:00.952) 0:02:23.675 ********** 2025-05-03 00:53:06.083710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-03 00:53:06.083719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-03 00:53:06.083732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-03 00:53:06.083741 | orchestrator | 2025-05-03 00:53:06.083749 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-05-03 00:53:06.083757 | orchestrator | Saturday 03 May 2025 00:48:17 +0000 (0:00:04.500) 0:02:28.175 ********** 2025-05-03 00:53:06.083765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-03 00:53:06.083774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-03 00:53:06.083786 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.083794 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.083809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-03 00:53:06.083818 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.083826 | orchestrator | 2025-05-03 00:53:06.083834 | orchestrator | TASK [haproxy-config : Configuring firewall for 
grafana] *********************** 2025-05-03 00:53:06.083842 | orchestrator | Saturday 03 May 2025 00:48:17 +0000 (0:00:00.481) 0:02:28.657 ********** 2025-05-03 00:53:06.083850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-03 00:53:06.083877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-03 00:53:06.083886 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.083895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-03 00:53:06.083903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-03 00:53:06.083911 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.083919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-03 00:53:06.083931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-03 00:53:06.085542 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.085678 | orchestrator | 2025-05-03 00:53:06.085703 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-05-03 00:53:06.085723 | orchestrator | Saturday 
03 May 2025 00:48:18 +0000 (0:00:01.124) 0:02:29.781 ********** 2025-05-03 00:53:06.085737 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:53:06.085751 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:53:06.085765 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:53:06.085780 | orchestrator | 2025-05-03 00:53:06.085794 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-05-03 00:53:06.085809 | orchestrator | Saturday 03 May 2025 00:48:19 +0000 (0:00:01.191) 0:02:30.972 ********** 2025-05-03 00:53:06.085823 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:53:06.085837 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:53:06.085874 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:53:06.085888 | orchestrator | 2025-05-03 00:53:06.085903 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-05-03 00:53:06.085917 | orchestrator | Saturday 03 May 2025 00:48:22 +0000 (0:00:02.928) 0:02:33.901 ********** 2025-05-03 00:53:06.085931 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:53:06.085946 | orchestrator | 2025-05-03 00:53:06.085960 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] *********************** 2025-05-03 00:53:06.085974 | orchestrator | Saturday 03 May 2025 00:48:24 +0000 (0:00:01.541) 0:02:35.443 ********** 2025-05-03 00:53:06.085992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-03 00:53:06.086012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-03 00:53:06.086090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 
'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-03 00:53:06.086135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-03 00:53:06.086186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-03 00:53:06.086203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.086219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.086234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-03 00:53:06.086248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.086263 | orchestrator | 2025-05-03 00:53:06.086287 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] *** 2025-05-03 00:53:06.086310 | orchestrator | Saturday 03 May 2025 00:48:32 +0000 (0:00:07.846) 0:02:43.290 ********** 2025-05-03 00:53:06.086333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-03 00:53:06.086350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-03 00:53:06.086365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.086379 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.086394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': 
['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-03 00:53:06.086485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-03 00:53:06.086521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.086537 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.086552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-03 00:53:06.086567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 
'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-03 00:53:06.086581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.086596 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.086610 | orchestrator | 2025-05-03 00:53:06.086625 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 2025-05-03 00:53:06.086639 | orchestrator | Saturday 03 May 2025 00:48:33 +0000 (0:00:01.046) 0:02:44.336 ********** 2025-05-03 00:53:06.086653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-03 00:53:06.086670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-03 00:53:06.086692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-03 00:53:06.086715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-03 00:53:06.086731 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.086746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-03 00:53:06.086761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-03 00:53:06.086781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-03 00:53:06.086795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-03 00:53:06.086808 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.086821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-03 00:53:06.086833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-03 00:53:06.086846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 
'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-03 00:53:06.086859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-03 00:53:06.086871 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.086889 | orchestrator | 2025-05-03 00:53:06.086902 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] *************** 2025-05-03 00:53:06.086915 | orchestrator | Saturday 03 May 2025 00:48:34 +0000 (0:00:01.381) 0:02:45.718 ********** 2025-05-03 00:53:06.086927 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:53:06.086940 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:53:06.086952 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:53:06.086979 | orchestrator | 2025-05-03 00:53:06.086992 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] *************** 2025-05-03 00:53:06.087016 | orchestrator | Saturday 03 May 2025 00:48:35 +0000 (0:00:01.394) 0:02:47.112 ********** 2025-05-03 00:53:06.087029 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:53:06.087041 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:53:06.087054 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:53:06.087066 | orchestrator | 2025-05-03 00:53:06.087083 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-05-03 00:53:06.087096 | orchestrator | Saturday 03 May 2025 00:48:38 +0000 (0:00:02.193) 0:02:49.306 ********** 2025-05-03 00:53:06.087108 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:53:06.087127 | orchestrator | 2025-05-03 00:53:06.087139 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-05-03 00:53:06.087151 | orchestrator 
| Saturday 03 May 2025 00:48:39 +0000 (0:00:01.041) 0:02:50.348 ********** 2025-05-03 00:53:06.087181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-03 00:53:06.087198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-03 00:53:06.087239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-03 00:53:06.087262 | orchestrator | 2025-05-03 00:53:06.087276 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-05-03 00:53:06.087288 | orchestrator | Saturday 03 May 2025 00:48:43 +0000 (0:00:04.193) 0:02:54.541 ********** 2025-05-03 00:53:06.087301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-03 00:53:06.087320 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.087348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-03 00:53:06.087364 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.087377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 
'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-03 00:53:06.087404 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.087433 | orchestrator | 2025-05-03 00:53:06.087452 | orchestrator | TASK [haproxy-config : 
Configuring firewall for horizon] *********************** 2025-05-03 00:53:06.087466 | orchestrator | Saturday 03 May 2025 00:48:44 +0000 (0:00:00.886) 0:02:55.427 ********** 2025-05-03 00:53:06.087481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-03 00:53:06.087495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-03 00:53:06.087510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-03 00:53:06.087523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-03 00:53:06.087537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-03 00:53:06.087552 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.087571 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-03 00:53:06.087591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-03 00:53:06.087604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-03 00:53:06.087617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-03 00:53:06.087630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-03 00:53:06.087643 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.087656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}})  2025-05-03 00:53:06.087668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-03 00:53:06.087687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-03 00:53:06.087700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-03 00:53:06.087713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-03 00:53:06.087725 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.087738 | orchestrator | 2025-05-03 00:53:06.087751 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-05-03 00:53:06.087763 | orchestrator | Saturday 03 May 2025 00:48:45 +0000 (0:00:01.504) 0:02:56.932 ********** 2025-05-03 00:53:06.087775 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:53:06.087788 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:53:06.087800 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:53:06.087813 | orchestrator | 2025-05-03 00:53:06.087825 | orchestrator | TASK 
[proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-05-03 00:53:06.087837 | orchestrator | Saturday 03 May 2025 00:48:47 +0000 (0:00:01.549) 0:02:58.481 ********** 2025-05-03 00:53:06.087850 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:53:06.087862 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:53:06.087875 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:53:06.087887 | orchestrator | 2025-05-03 00:53:06.087899 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-05-03 00:53:06.087917 | orchestrator | Saturday 03 May 2025 00:48:49 +0000 (0:00:02.519) 0:03:01.001 ********** 2025-05-03 00:53:06.087929 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.087942 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.087954 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.087967 | orchestrator | 2025-05-03 00:53:06.087979 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-05-03 00:53:06.087992 | orchestrator | Saturday 03 May 2025 00:48:50 +0000 (0:00:00.462) 0:03:01.464 ********** 2025-05-03 00:53:06.088004 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.088017 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.088029 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.088041 | orchestrator | 2025-05-03 00:53:06.088054 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-05-03 00:53:06.088066 | orchestrator | Saturday 03 May 2025 00:48:50 +0000 (0:00:00.286) 0:03:01.750 ********** 2025-05-03 00:53:06.088079 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:53:06.088091 | orchestrator | 2025-05-03 00:53:06.088104 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-05-03 00:53:06.088116 | 
orchestrator | Saturday 03 May 2025 00:48:51 +0000 (0:00:01.277) 0:03:03.028 ********** 2025-05-03 00:53:06.088129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-03 00:53:06.088144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-03 00:53:06.088165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-03 00:53:06.088180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-03 00:53:06.088200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-03 00:53:06.088214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-03 00:53:06.088227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-03 00:53:06.088247 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-03 00:53:06.088260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-03 00:53:06.088279 | orchestrator | 2025-05-03 00:53:06.088292 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-05-03 00:53:06.088305 | orchestrator | Saturday 03 May 2025 00:48:56 +0000 (0:00:04.686) 0:03:07.715 ********** 2025-05-03 00:53:06.088318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-03 00:53:06.088332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-03 00:53:06.088353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-03 00:53:06.088367 | 
orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.088387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-03 00:53:06.088401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-03 00:53:06.088439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-03 00:53:06.088453 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.088466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-03 00:53:06.088488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-03 00:53:06.088502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-03 00:53:06.088515 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.088528 | orchestrator | 2025-05-03 00:53:06.088619 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-05-03 00:53:06.088633 | orchestrator | Saturday 03 May 2025 00:48:57 +0000 (0:00:01.073) 0:03:08.788 ********** 2025-05-03 00:53:06.088652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-03 00:53:06.088675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-03 00:53:06.088688 | orchestrator | skipping: [testbed-node-0] 2025-05-03 
00:53:06.088701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-03 00:53:06.088714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-03 00:53:06.088727 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.088740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-03 00:53:06.088753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-03 00:53:06.088766 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.088779 | orchestrator | 2025-05-03 00:53:06.088791 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-05-03 00:53:06.088804 | orchestrator | Saturday 03 May 2025 00:48:58 +0000 (0:00:01.045) 0:03:09.833 ********** 2025-05-03 00:53:06.088816 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:53:06.088829 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:53:06.088841 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:53:06.088854 | orchestrator | 2025-05-03 00:53:06.088866 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] 
*********** 2025-05-03 00:53:06.088879 | orchestrator | Saturday 03 May 2025 00:49:00 +0000 (0:00:01.410) 0:03:11.243 ********** 2025-05-03 00:53:06.088891 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:53:06.088904 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:53:06.088916 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:53:06.088928 | orchestrator | 2025-05-03 00:53:06.088941 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-05-03 00:53:06.088954 | orchestrator | Saturday 03 May 2025 00:49:02 +0000 (0:00:02.515) 0:03:13.759 ********** 2025-05-03 00:53:06.088966 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.088979 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.088991 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.089003 | orchestrator | 2025-05-03 00:53:06.089021 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-05-03 00:53:06.089034 | orchestrator | Saturday 03 May 2025 00:49:02 +0000 (0:00:00.289) 0:03:14.049 ********** 2025-05-03 00:53:06.089047 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:53:06.089059 | orchestrator | 2025-05-03 00:53:06.089071 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-05-03 00:53:06.089084 | orchestrator | Saturday 03 May 2025 00:49:04 +0000 (0:00:01.476) 0:03:15.525 ********** 2025-05-03 00:53:06.089097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-03 00:53:06.089122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.089138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-03 00:53:06.089151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.089165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-03 00:53:06.089186 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.089199 | orchestrator | 2025-05-03 00:53:06.089212 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-05-03 00:53:06.089225 | orchestrator | Saturday 03 May 2025 00:49:09 +0000 (0:00:04.790) 0:03:20.316 ********** 2025-05-03 00:53:06.089244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-03 
00:53:06.089258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.089271 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.089285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-03 00:53:06.089298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.089316 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.089335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-03 00:53:06.089494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.089516 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.089529 | orchestrator | 2025-05-03 00:53:06.089541 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-05-03 00:53:06.089554 | orchestrator | Saturday 03 May 2025 00:49:10 +0000 (0:00:01.117) 0:03:21.433 ********** 2025-05-03 00:53:06.089567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-03 00:53:06.089581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-03 00:53:06.089600 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.089613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-03 00:53:06.089626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-03 00:53:06.089638 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.089651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': 
'9511'}})  2025-05-03 00:53:06.089664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-03 00:53:06.089684 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.089697 | orchestrator | 2025-05-03 00:53:06.089709 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-05-03 00:53:06.089722 | orchestrator | Saturday 03 May 2025 00:49:11 +0000 (0:00:01.360) 0:03:22.793 ********** 2025-05-03 00:53:06.089734 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:53:06.089746 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:53:06.089759 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:53:06.089771 | orchestrator | 2025-05-03 00:53:06.089783 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-05-03 00:53:06.089796 | orchestrator | Saturday 03 May 2025 00:49:13 +0000 (0:00:01.434) 0:03:24.228 ********** 2025-05-03 00:53:06.089808 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:53:06.089820 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:53:06.089832 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:53:06.089845 | orchestrator | 2025-05-03 00:53:06.089858 | orchestrator | TASK [include_role : manila] *************************************************** 2025-05-03 00:53:06.089870 | orchestrator | Saturday 03 May 2025 00:49:15 +0000 (0:00:02.474) 0:03:26.703 ********** 2025-05-03 00:53:06.089883 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:53:06.089895 | orchestrator | 2025-05-03 00:53:06.089908 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-05-03 00:53:06.089920 | orchestrator | Saturday 03 May 2025 00:49:16 +0000 (0:00:01.149) 
0:03:27.852 ********** 2025-05-03 00:53:06.090002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-03 00:53:06.090049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.090065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.090079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.090099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-03 00:53:06.090112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.090125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.090208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.090228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-03 00:53:06.090248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.090262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-share 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.090275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.090288 | orchestrator | 2025-05-03 00:53:06.090301 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-05-03 00:53:06.090315 | orchestrator | Saturday 03 May 2025 00:49:21 +0000 (0:00:04.407) 0:03:32.260 ********** 2025-05-03 00:53:06.090389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-03 00:53:06.090462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 
'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.090478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.090499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.090513 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.090536 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-03 00:53:06.090550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.090636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.090655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.090669 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.090683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-03 00:53:06.090704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.090717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.090731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.090743 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.090754 | orchestrator | 2025-05-03 00:53:06.090765 | orchestrator | TASK [haproxy-config : Configuring firewall for 
manila] ************************ 2025-05-03 00:53:06.090775 | orchestrator | Saturday 03 May 2025 00:49:21 +0000 (0:00:00.759) 0:03:33.019 ********** 2025-05-03 00:53:06.090786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-03 00:53:06.090853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-03 00:53:06.090868 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.090879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-03 00:53:06.090889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-03 00:53:06.090899 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.090916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-03 00:53:06.090926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-03 00:53:06.090937 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.090947 | orchestrator | 2025-05-03 00:53:06.090957 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-05-03 00:53:06.090967 | orchestrator | Saturday 03 May 2025 00:49:22 
+0000 (0:00:00.939) 0:03:33.959 ********** 2025-05-03 00:53:06.090978 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:53:06.090988 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:53:06.090998 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:53:06.091008 | orchestrator | 2025-05-03 00:53:06.091018 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-05-03 00:53:06.091028 | orchestrator | Saturday 03 May 2025 00:49:24 +0000 (0:00:01.241) 0:03:35.200 ********** 2025-05-03 00:53:06.091038 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:53:06.091049 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:53:06.091059 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:53:06.091069 | orchestrator | 2025-05-03 00:53:06.091079 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-05-03 00:53:06.091089 | orchestrator | Saturday 03 May 2025 00:49:26 +0000 (0:00:02.374) 0:03:37.575 ********** 2025-05-03 00:53:06.091100 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:53:06.091110 | orchestrator | 2025-05-03 00:53:06.091120 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-05-03 00:53:06.091130 | orchestrator | Saturday 03 May 2025 00:49:28 +0000 (0:00:01.700) 0:03:39.276 ********** 2025-05-03 00:53:06.091141 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-03 00:53:06.091151 | orchestrator | 2025-05-03 00:53:06.091161 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-05-03 00:53:06.091171 | orchestrator | Saturday 03 May 2025 00:49:31 +0000 (0:00:03.733) 0:03:43.009 ********** 2025-05-03 00:53:06.091183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-03 00:53:06.091261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 
'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-03 00:53:06.091277 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.091288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-03 00:53:06.091300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-03 00:53:06.091311 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.091375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-03 00:53:06.091398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-03 00:53:06.091425 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.091436 | orchestrator | 2025-05-03 00:53:06.091447 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-05-03 00:53:06.091457 | orchestrator | Saturday 03 May 2025 00:49:35 +0000 (0:00:03.505) 0:03:46.515 ********** 2025-05-03 00:53:06.091468 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-03 00:53:06.091541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-03 00:53:06.091556 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.091567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-03 00:53:06.091579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-03 00:53:06.091590 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.091653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 
'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-03 00:53:06.091676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-03 00:53:06.091687 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.091697 | orchestrator | 2025-05-03 00:53:06.091708 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-05-03 
00:53:06.091718 | orchestrator | Saturday 03 May 2025 00:49:37 +0000 (0:00:02.563) 0:03:49.078 ********** 2025-05-03 00:53:06.091728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-03 00:53:06.091740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-03 00:53:06.091750 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.091761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-03 00:53:06.091771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-03 00:53:06.091792 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.091855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-03 00:53:06.091870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 
4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-03 00:53:06.091881 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.091892 | orchestrator | 2025-05-03 00:53:06.091902 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-05-03 00:53:06.091913 | orchestrator | Saturday 03 May 2025 00:49:40 +0000 (0:00:02.392) 0:03:51.471 ********** 2025-05-03 00:53:06.091923 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:53:06.091933 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:53:06.091943 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:53:06.091954 | orchestrator | 2025-05-03 00:53:06.091964 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-05-03 00:53:06.091974 | orchestrator | Saturday 03 May 2025 00:49:42 +0000 (0:00:01.887) 0:03:53.359 ********** 2025-05-03 00:53:06.091984 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.091994 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.092004 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.092015 | orchestrator | 2025-05-03 00:53:06.092025 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-05-03 00:53:06.092035 | orchestrator | Saturday 03 May 2025 00:49:43 +0000 (0:00:01.408) 0:03:54.767 ********** 2025-05-03 00:53:06.092045 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.092055 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.092065 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.092075 | orchestrator | 2025-05-03 00:53:06.092086 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-05-03 00:53:06.092096 | orchestrator | Saturday 03 May 2025 00:49:43 +0000 (0:00:00.250) 0:03:55.017 ********** 2025-05-03 00:53:06.092106 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-05-03 00:53:06.092116 | orchestrator | 2025-05-03 00:53:06.092126 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-05-03 00:53:06.092136 | orchestrator | Saturday 03 May 2025 00:49:45 +0000 (0:00:01.267) 0:03:56.285 ********** 2025-05-03 00:53:06.092147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-03 00:53:06.092168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-03 00:53:06.092247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 
'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-03 00:53:06.092263 | orchestrator | 2025-05-03 00:53:06.092273 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-05-03 00:53:06.092284 | orchestrator | Saturday 03 May 2025 00:49:46 +0000 (0:00:01.706) 0:03:57.991 ********** 2025-05-03 00:53:06.092313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-03 00:53:06.092325 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.092336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 
'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-03 00:53:06.092352 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.092363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-03 00:53:06.092374 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.092384 | orchestrator | 2025-05-03 00:53:06.092394 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-05-03 00:53:06.092418 | orchestrator | Saturday 03 May 2025 00:49:47 +0000 (0:00:00.614) 0:03:58.606 ********** 2025-05-03 00:53:06.092429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 
'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-03 00:53:06.092440 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.092451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-03 00:53:06.092461 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.092472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-03 00:53:06.092482 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.092492 | orchestrator | 2025-05-03 00:53:06.092560 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-05-03 00:53:06.092574 | orchestrator | Saturday 03 May 2025 00:49:48 +0000 (0:00:00.799) 0:03:59.405 ********** 2025-05-03 00:53:06.092585 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.092595 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.092605 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.092615 | orchestrator | 2025-05-03 00:53:06.092626 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-05-03 00:53:06.092636 | orchestrator | Saturday 03 May 2025 00:49:48 +0000 (0:00:00.713) 0:04:00.119 ********** 2025-05-03 00:53:06.092646 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.092656 | orchestrator | skipping: 
[testbed-node-1] 2025-05-03 00:53:06.092666 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.092677 | orchestrator | 2025-05-03 00:53:06.092687 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-05-03 00:53:06.092697 | orchestrator | Saturday 03 May 2025 00:49:50 +0000 (0:00:01.939) 0:04:02.058 ********** 2025-05-03 00:53:06.092707 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.092717 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.092728 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.092738 | orchestrator | 2025-05-03 00:53:06.092749 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-05-03 00:53:06.092759 | orchestrator | Saturday 03 May 2025 00:49:51 +0000 (0:00:00.312) 0:04:02.370 ********** 2025-05-03 00:53:06.092769 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:53:06.092786 | orchestrator | 2025-05-03 00:53:06.092797 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-05-03 00:53:06.092807 | orchestrator | Saturday 03 May 2025 00:49:52 +0000 (0:00:01.559) 0:04:03.930 ********** 2025-05-03 00:53:06.092818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-03 00:53:06.092830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.092841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.092913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.092929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 00:53:06.092946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.092961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 00:53:06.092973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-03 00:53:06.092984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 00:53:06.093052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.093068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.093094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 00:53:06.093106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.093117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.093128 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.093192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 00:53:06.093215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.093227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.093238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 00:53:06.093249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 00:53:06.093260 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.093280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 00:53:06.093346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 00:53:06.093368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.093378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 00:53:06.093398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 00:53:06.093459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.093536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.093562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.093573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 00:53:06.093583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.093604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 00:53:06.093615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 00:53:06.093681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.093703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-03 00:53:06.093722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.093733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': 
{'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.093745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.093755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 00:53:06.093825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.093845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 00:53:06.093860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 
00:53:06.093871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.093882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 00:53:06.093893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.093968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.093982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 00:53:06.093991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.094005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 00:53:06.094014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 00:53:06.094047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.094061 | orchestrator | 2025-05-03 00:53:06.094071 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-05-03 00:53:06.094079 | orchestrator | Saturday 03 May 2025 00:49:58 +0000 (0:00:05.757) 0:04:09.687 ********** 2025-05-03 00:53:06.094151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 00:53:06.094165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.094174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.094183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.094193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 00:53:06.094258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 
00:53:06.094278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.094288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.094297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.094306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.094321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 00:53:06.094379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 00:53:06.094398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 00:53:06.094426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.094436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.094446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 00:53:06.094455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 00:53:06.094518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 00:53:06.094531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.094550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.094560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.094569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 00:53:06.094578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 00:53:06.094592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.094646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.094666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.094676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 00:53:06.094685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 00:53:06.094694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 00:53:06.094714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.094772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.094785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 00:53:06.094794 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.094804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 00:53:06.094813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.094827 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.094837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 00:53:06.094905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.094919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.094928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.094937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  
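The skipped loop items above all follow one pattern: each kolla-ansible service is described by a dict whose flags such as 'enabled' and 'host_in_groups' decide whether the task acts on it or skips it (note that 'enabled' is sometimes a boolean and sometimes the string 'no', as for neutron-tls-proxy). As a minimal illustrative sketch of that filtering logic (not kolla-ansible's actual code; the function and variable names here are hypothetical):

```python
def service_enabled(svc: dict) -> bool:
    """Kolla-style truthiness: 'enabled' may be a bool or a 'yes'/'no' string,
    and the host must also be in the service's group ('host_in_groups')."""
    enabled = svc.get("enabled", False)
    if isinstance(enabled, str):
        enabled = enabled.lower() in ("yes", "true", "1")
    return bool(enabled) and bool(svc.get("host_in_groups", False))


# Trimmed-down stand-ins for the service dicts seen in the log above.
services = {
    "neutron-server": {"enabled": True, "host_in_groups": True},
    "neutron-tls-proxy": {"enabled": "no", "host_in_groups": True},
    "neutron-ovn-agent": {"enabled": False, "host_in_groups": False},
}

# Only services passing the filter are deployed; the rest show up as
# "skipping: [...]" items in the Ansible output.
to_deploy = [name for name, svc in services.items() if service_enabled(svc)]
print(to_deploy)  # → ['neutron-server']
```

This mirrors why, in the log, only neutron-server and neutron-ovn-metadata-agent items lead to action on the testbed nodes while every disabled agent is reported as skipped.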
2025-05-03 00:53:06.094952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.094961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 00:53:06.095019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 00:53:06.095039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.095049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 00:53:06.095058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.095073 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.095082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 00:53:06.095091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.095156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 00:53:06.095169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 00:53:06.095179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-03 00:53:06.095193 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.095202 | orchestrator |
2025-05-03 00:53:06.095211 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2025-05-03 00:53:06.095224 | orchestrator | Saturday 03 May 2025 00:50:00 +0000 (0:00:01.769) 0:04:11.457 **********
2025-05-03 00:53:06.095233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-05-03 00:53:06.095242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-05-03 00:53:06.095251 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.095263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-05-03 00:53:06.095272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-05-03 00:53:06.095281 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.095289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-05-03 00:53:06.095298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-05-03 00:53:06.095307 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.095320 | orchestrator |
2025-05-03 00:53:06.095328 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2025-05-03 00:53:06.095337 | orchestrator | Saturday 03 May 2025 00:50:02 +0000 (0:00:02.031) 0:04:13.489 **********
2025-05-03 00:53:06.095346 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:53:06.095354 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:53:06.095383 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:53:06.095393 | orchestrator |
2025-05-03 00:53:06.095402 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2025-05-03 00:53:06.095427 | orchestrator | Saturday 03 May 2025 00:50:03 +0000 (0:00:01.459) 0:04:14.948 **********
2025-05-03 00:53:06.095436 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:53:06.095444 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:53:06.095453 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:53:06.095462 | orchestrator |
2025-05-03 00:53:06.095470 | orchestrator | TASK [include_role : placement] ************************************************
2025-05-03 00:53:06.095479 | orchestrator | Saturday 03 May 2025 00:50:06 +0000 (0:00:02.520) 0:04:17.469 **********
2025-05-03 00:53:06.095487 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:53:06.095496 | orchestrator |
2025-05-03 00:53:06.095505 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2025-05-03 00:53:06.095513 | orchestrator | Saturday 03 May 2025 00:50:07 +0000 (0:00:01.597) 0:04:19.066 **********
2025-05-03 00:53:06.095522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value':
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-03 00:53:06.095547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-03 00:53:06.095557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-03 00:53:06.095566 | orchestrator |
2025-05-03 00:53:06.095575 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2025-05-03 00:53:06.095584 | orchestrator | Saturday 03 May 2025 00:50:11 +0000 (0:00:04.010) 0:04:23.076 **********
2025-05-03 00:53:06.095613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780',
'tls_backend': 'no'}}}})  2025-05-03 00:53:06.095623 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.095633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-03 00:53:06.095647 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.095662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-03 00:53:06.095672 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.095681 | orchestrator |
2025-05-03 00:53:06.095689 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2025-05-03 00:53:06.095698 | orchestrator | Saturday 03 May 2025 00:50:12 +0000 (0:00:00.523) 0:04:23.600 **********
2025-05-03 00:53:06.095707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-05-03 00:53:06.095716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-05-03 00:53:06.095725 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.095734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-05-03 00:53:06.095743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-05-03 00:53:06.095752 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.095761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-05-03 00:53:06.095769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-05-03 00:53:06.095778 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.095787 | orchestrator |
2025-05-03 00:53:06.095796 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2025-05-03 00:53:06.095823 | orchestrator | Saturday 03 May 2025 00:50:13 +0000 (0:00:01.225) 0:04:24.826 **********
2025-05-03 00:53:06.095833 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:53:06.095842 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:53:06.095856 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:53:06.095865 | orchestrator |
2025-05-03 00:53:06.095873 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2025-05-03 00:53:06.095882 | orchestrator | Saturday 03 May 2025 00:50:15 +0000 (0:00:01.427) 0:04:26.253 **********
2025-05-03 00:53:06.095891 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:53:06.095899 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:53:06.095908 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:53:06.095917 | orchestrator |
2025-05-03 00:53:06.095925 | orchestrator | TASK [include_role : nova] *****************************************************
2025-05-03 00:53:06.095934 | orchestrator | Saturday 03 May 2025 00:50:17 +0000 (0:00:02.260) 0:04:28.514 **********
2025-05-03 00:53:06.095943 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:53:06.095952 | orchestrator |
2025-05-03 00:53:06.095960 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2025-05-03 00:53:06.095969 | orchestrator | Saturday 03 May 2025 00:50:19 +0000 (0:00:01.825) 0:04:30.340 **********
2025-05-03 00:53:06.095978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name':
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-03 00:53:06.095987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.095997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.096030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-03 00:53:06.096046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-03 00:53:06.096056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.096065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.096074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.096109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.096126 | orchestrator | 2025-05-03 00:53:06.096135 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-05-03 00:53:06.096144 | orchestrator | Saturday 03 May 2025 00:50:24 +0000 (0:00:05.357) 0:04:35.698 ********** 2025-05-03 00:53:06.096153 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-03 00:53:06.096163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.096172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 
'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.096181 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.096190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-03 00:53:06.096232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 
'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.096243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.096253 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.096262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-03 00:53:06.096271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.096280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.096295 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.096303 | orchestrator | 2025-05-03 00:53:06.096312 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] 
************************** 2025-05-03 00:53:06.096321 | orchestrator | Saturday 03 May 2025 00:50:25 +0000 (0:00:01.191) 0:04:36.889 ********** 2025-05-03 00:53:06.096329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-03 00:53:06.096356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-03 00:53:06.096367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-03 00:53:06.096376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-03 00:53:06.096385 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.096394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-03 00:53:06.096402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-03 00:53:06.096426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-03 00:53:06.096435 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-03 00:53:06.096444 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.096453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-03 00:53:06.096462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-03 00:53:06.096470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-03 00:53:06.096479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-03 00:53:06.096488 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.096496 | orchestrator | 2025-05-03 00:53:06.096505 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-05-03 00:53:06.096520 | orchestrator | Saturday 03 May 2025 00:50:27 +0000 (0:00:01.394) 0:04:38.283 ********** 2025-05-03 00:53:06.096529 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:53:06.096537 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:53:06.096546 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:53:06.096554 | orchestrator | 2025-05-03 00:53:06.096563 | orchestrator | TASK [proxysql-config : 
Copying over nova ProxySQL rules config] *************** 2025-05-03 00:53:06.096571 | orchestrator | Saturday 03 May 2025 00:50:28 +0000 (0:00:01.401) 0:04:39.685 ********** 2025-05-03 00:53:06.096580 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:53:06.096589 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:53:06.096597 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:53:06.096606 | orchestrator | 2025-05-03 00:53:06.096614 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-05-03 00:53:06.096623 | orchestrator | Saturday 03 May 2025 00:50:30 +0000 (0:00:02.439) 0:04:42.124 ********** 2025-05-03 00:53:06.096632 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:53:06.096640 | orchestrator | 2025-05-03 00:53:06.096653 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-05-03 00:53:06.096662 | orchestrator | Saturday 03 May 2025 00:50:32 +0000 (0:00:01.703) 0:04:43.828 ********** 2025-05-03 00:53:06.096671 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-05-03 00:53:06.096680 | orchestrator | 2025-05-03 00:53:06.096688 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-05-03 00:53:06.096697 | orchestrator | Saturday 03 May 2025 00:50:34 +0000 (0:00:01.384) 0:04:45.213 ********** 2025-05-03 00:53:06.096725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-03 00:53:06.096743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-03 00:53:06.096753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-03 00:53:06.096762 | orchestrator | 2025-05-03 00:53:06.096771 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-05-03 00:53:06.096780 | orchestrator | Saturday 03 May 2025 00:50:38 +0000 (0:00:04.853) 0:04:50.066 ********** 2025-05-03 00:53:06.096789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': 
['timeout tunnel 1h']}}}})  2025-05-03 00:53:06.096803 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.096812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-03 00:53:06.096821 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.096830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-03 00:53:06.096839 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.096848 | orchestrator | 2025-05-03 00:53:06.096857 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-05-03 00:53:06.096865 | orchestrator | Saturday 03 May 2025 00:50:40 +0000 (0:00:01.413) 0:04:51.480 ********** 2025-05-03 00:53:06.096874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-03 00:53:06.096883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-03 00:53:06.096892 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.096901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-03 00:53:06.096932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-03 00:53:06.096942 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.096951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-03 00:53:06.096960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-03 00:53:06.096969 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.096978 | orchestrator | 2025-05-03 00:53:06.096987 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-03 00:53:06.096995 | orchestrator | Saturday 03 May 2025 00:50:41 +0000 (0:00:01.498) 0:04:52.979 ********** 2025-05-03 00:53:06.097004 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:53:06.097012 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:53:06.097021 | orchestrator | changed: 
[testbed-node-2] 2025-05-03 00:53:06.097030 | orchestrator | 2025-05-03 00:53:06.097038 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-03 00:53:06.097055 | orchestrator | Saturday 03 May 2025 00:50:44 +0000 (0:00:02.480) 0:04:55.459 ********** 2025-05-03 00:53:06.097063 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:53:06.097072 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:53:06.097081 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:53:06.097089 | orchestrator | 2025-05-03 00:53:06.097098 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-05-03 00:53:06.097107 | orchestrator | Saturday 03 May 2025 00:50:47 +0000 (0:00:03.038) 0:04:58.498 ********** 2025-05-03 00:53:06.097120 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-05-03 00:53:06.097129 | orchestrator | 2025-05-03 00:53:06.097137 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-05-03 00:53:06.097147 | orchestrator | Saturday 03 May 2025 00:50:48 +0000 (0:00:01.407) 0:04:59.905 ********** 2025-05-03 00:53:06.097156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-03 00:53:06.097165 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.097180 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-03 00:53:06.097189 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.097199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-03 00:53:06.097208 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.097216 | orchestrator | 2025-05-03 00:53:06.097225 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-05-03 00:53:06.097234 | orchestrator | Saturday 03 May 2025 00:50:50 +0000 (0:00:01.575) 0:05:01.481 ********** 2025-05-03 00:53:06.097262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-03 00:53:06.097272 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.097281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-03 00:53:06.097295 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.097304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-03 00:53:06.097313 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.097322 | orchestrator | 2025-05-03 00:53:06.097331 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-05-03 00:53:06.097339 | orchestrator | Saturday 03 May 2025 00:50:52 +0000 (0:00:01.931) 0:05:03.412 ********** 2025-05-03 00:53:06.097348 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.097357 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.097365 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.097378 | orchestrator | 2025-05-03 
00:53:06.097387 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-03 00:53:06.097396 | orchestrator | Saturday 03 May 2025 00:50:54 +0000 (0:00:01.920) 0:05:05.333 ********** 2025-05-03 00:53:06.097417 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:53:06.097427 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:53:06.097436 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:53:06.097445 | orchestrator | 2025-05-03 00:53:06.097453 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-03 00:53:06.097462 | orchestrator | Saturday 03 May 2025 00:50:57 +0000 (0:00:02.998) 0:05:08.332 ********** 2025-05-03 00:53:06.097471 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:53:06.097480 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:53:06.097488 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:53:06.097497 | orchestrator | 2025-05-03 00:53:06.097506 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-05-03 00:53:06.097514 | orchestrator | Saturday 03 May 2025 00:51:00 +0000 (0:00:02.843) 0:05:11.175 ********** 2025-05-03 00:53:06.097523 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-05-03 00:53:06.097532 | orchestrator | 2025-05-03 00:53:06.097540 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-05-03 00:53:06.097549 | orchestrator | Saturday 03 May 2025 00:51:01 +0000 (0:00:01.154) 0:05:12.330 ********** 2025-05-03 00:53:06.097558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': 
['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-03 00:53:06.097567 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.097582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-03 00:53:06.097599 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.097627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-03 00:53:06.097637 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.097646 | orchestrator | 2025-05-03 00:53:06.097655 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-05-03 00:53:06.097664 | orchestrator | Saturday 03 May 2025 00:51:02 +0000 (0:00:01.642) 0:05:13.972 ********** 2025-05-03 00:53:06.097673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-03 00:53:06.097681 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.097690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-03 00:53:06.097699 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.097708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-03 00:53:06.097717 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.097725 | orchestrator | 2025-05-03 00:53:06.097734 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 
2025-05-03 00:53:06.097743 | orchestrator | Saturday 03 May 2025 00:51:04 +0000 (0:00:01.567) 0:05:15.540 ********** 2025-05-03 00:53:06.097751 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.097760 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.097769 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.097777 | orchestrator | 2025-05-03 00:53:06.097786 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-03 00:53:06.097795 | orchestrator | Saturday 03 May 2025 00:51:06 +0000 (0:00:02.146) 0:05:17.686 ********** 2025-05-03 00:53:06.097803 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:53:06.097812 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:53:06.097820 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:53:06.097829 | orchestrator | 2025-05-03 00:53:06.097838 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-03 00:53:06.097851 | orchestrator | Saturday 03 May 2025 00:51:09 +0000 (0:00:02.961) 0:05:20.647 ********** 2025-05-03 00:53:06.097859 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:53:06.097873 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:53:06.097882 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:53:06.097891 | orchestrator | 2025-05-03 00:53:06.097899 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-05-03 00:53:06.097908 | orchestrator | Saturday 03 May 2025 00:51:13 +0000 (0:00:03.636) 0:05:24.284 ********** 2025-05-03 00:53:06.097916 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:53:06.097925 | orchestrator | 2025-05-03 00:53:06.097933 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-05-03 00:53:06.097942 | orchestrator | Saturday 03 May 2025 00:51:14 +0000 (0:00:01.744) 0:05:26.029 ********** 2025-05-03 
00:53:06.097969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-03 00:53:06.097979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-03 00:53:06.097995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-03 00:53:06.098005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-03 00:53:06.098014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.098053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-03 00:53:06.098069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-03 00:53:06.098100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-03 00:53:06.098111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-03 00:53:06.098120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-03 00:53:06.098130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.098147 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-03 00:53:06.098160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-03 00:53:06.098189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-03 00:53:06.098200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.098209 | orchestrator | 2025-05-03 00:53:06.098218 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-05-03 00:53:06.098227 | orchestrator | Saturday 03 May 2025 00:51:19 +0000 (0:00:04.773) 0:05:30.803 ********** 2025-05-03 00:53:06.098236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-03 00:53:06.098251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-03 00:53:06.098268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-03 00:53:06.098278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-03 00:53:06.098305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.098316 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.098325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-03 00:53:06.098334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-03 00:53:06.098348 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-03 00:53:06.098357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-03 00:53:06.098372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.098382 | orchestrator | skipping: [testbed-node-1] 2025-05-03 
00:53:06.098459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-03 00:53:06.098473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-03 00:53:06.098482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-03 00:53:06.098497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-03 00:53:06.098507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-03 00:53:06.098524 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.098533 | orchestrator | 2025-05-03 00:53:06.098542 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-05-03 00:53:06.098551 | orchestrator | Saturday 03 May 2025 00:51:20 +0000 (0:00:00.990) 0:05:31.793 ********** 2025-05-03 00:53:06.098560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-03 00:53:06.098568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-03 00:53:06.098579 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.098587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-03 00:53:06.098596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-03 00:53:06.098605 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.098635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-03 00:53:06.098645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-03 00:53:06.098654 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.098663 | orchestrator | 2025-05-03 00:53:06.098672 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-05-03 00:53:06.098680 | orchestrator | Saturday 03 May 2025 00:51:22 +0000 (0:00:01.368) 0:05:33.162 ********** 2025-05-03 00:53:06.098689 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:53:06.098698 | orchestrator | 
changed: [testbed-node-1] 2025-05-03 00:53:06.098706 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:53:06.098715 | orchestrator | 2025-05-03 00:53:06.098724 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-05-03 00:53:06.098733 | orchestrator | Saturday 03 May 2025 00:51:23 +0000 (0:00:01.466) 0:05:34.628 ********** 2025-05-03 00:53:06.098746 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:53:06.098755 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:53:06.098764 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:53:06.098772 | orchestrator | 2025-05-03 00:53:06.098781 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-05-03 00:53:06.098790 | orchestrator | Saturday 03 May 2025 00:51:26 +0000 (0:00:02.730) 0:05:37.359 ********** 2025-05-03 00:53:06.098798 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:53:06.098807 | orchestrator | 2025-05-03 00:53:06.098816 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-05-03 00:53:06.098824 | orchestrator | Saturday 03 May 2025 00:51:28 +0000 (0:00:01.844) 0:05:39.204 ********** 2025-05-03 00:53:06.098834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 
'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-03 00:53:06.098844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-03 00:53:06.098853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-03 00:53:06.098889 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-03 00:53:06.098905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-03 00:53:06.098915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-03 00:53:06.098924 | orchestrator | 2025-05-03 00:53:06.098933 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-05-03 00:53:06.098942 | orchestrator | Saturday 03 May 2025 00:51:34 +0000 (0:00:06.374) 0:05:45.578 ********** 2025-05-03 00:53:06.098971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-03 00:53:06.098988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-03 00:53:06.099002 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.099011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 
'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-03 00:53:06.099019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-03 00:53:06.099028 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.099036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-03 00:53:06.099069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-03 00:53:06.099084 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.099092 | orchestrator | 2025-05-03 00:53:06.099100 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-05-03 00:53:06.099109 | orchestrator 
| Saturday 03 May 2025 00:51:35 +0000 (0:00:00.908) 0:05:46.487 ********** 2025-05-03 00:53:06.099117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-03 00:53:06.099125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-03 00:53:06.099134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-03 00:53:06.099142 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.099151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-03 00:53:06.099159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-03 00:53:06.099167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-03 00:53:06.099175 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.099188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}})  2025-05-03 00:53:06.099196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-03 00:53:06.099204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-03 00:53:06.099212 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.099220 | orchestrator | 2025-05-03 00:53:06.099228 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-05-03 00:53:06.099236 | orchestrator | Saturday 03 May 2025 00:51:36 +0000 (0:00:01.407) 0:05:47.895 ********** 2025-05-03 00:53:06.099244 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.099257 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.099265 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.099273 | orchestrator | 2025-05-03 00:53:06.099281 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-05-03 00:53:06.099289 | orchestrator | Saturday 03 May 2025 00:51:37 +0000 (0:00:00.438) 0:05:48.333 ********** 2025-05-03 00:53:06.099297 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.099305 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.099313 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.099321 | orchestrator | 2025-05-03 00:53:06.099329 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-05-03 00:53:06.099337 | orchestrator | Saturday 03 May 2025 00:51:38 +0000 (0:00:01.684) 0:05:50.018 ********** 2025-05-03 00:53:06.099363 
| orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:53:06.099373 | orchestrator | 2025-05-03 00:53:06.099381 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-05-03 00:53:06.099389 | orchestrator | Saturday 03 May 2025 00:51:40 +0000 (0:00:01.857) 0:05:51.876 ********** 2025-05-03 00:53:06.099397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-03 00:53:06.099420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-03 00:53:06.099429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.099438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.099446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-03 00:53:06.099468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-03 00:53:06.099500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-03 00:53:06.099511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.099519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.099528 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-03 00:53:06.099536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-03 00:53:06.099545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-03 00:53:06.099563 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.099589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.099599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-03 00:53:06.099608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-03 00:53:06.099616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-03 00:53:06.099629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.099638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.099671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-03 00:53:06.099681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.099689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 
'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-03 00:53:06.099698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-03 00:53:06.099711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.099725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.099734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-03 00:53:06.099746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.099755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-03 00:53:06.099763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2025-05-03 00:53:06.099776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.099790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.099799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-03 00:53:06.099812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.099821 | orchestrator | 2025-05-03 00:53:06.099829 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-05-03 00:53:06.099837 | orchestrator | Saturday 03 May 2025 00:51:45 +0000 (0:00:04.803) 0:05:56.680 ********** 2025-05-03 00:53:06.099846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-03 00:53:06.099854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-03 00:53:06.099862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.099881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.099890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-03 00:53:06.099903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-03 00:53:06.099911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-03 00:53:06.099920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.099939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.099948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-03 00:53:06.099957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 
'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-03 00:53:06.099968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-03 00:53:06.099977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.099985 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.099994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.100002 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.100024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-03 00:53:06.100033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-03 00:53:06.100045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-03 00:53:06.100054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.100062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.100076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-03 00:53:06.100090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-03 00:53:06.100098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-03 00:53:06.100106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.100115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.100123 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.100135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.100144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-03 00:53:06.100157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-03 00:53:06.100171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-03 00:53:06.100179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.100188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.100200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-03 00:53:06.100209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 00:53:06.100217 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:53:06.100225 | orchestrator | 2025-05-03 00:53:06.100238 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-05-03 00:53:06.100246 | orchestrator | Saturday 03 May 2025 00:51:46 +0000 (0:00:01.212) 0:05:57.892 ********** 2025-05-03 00:53:06.100254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-03 00:53:06.100262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-03 00:53:06.100271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-03 00:53:06.100279 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-03 00:53:06.100287 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:53:06.100296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-03 00:53:06.100304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-03 00:53:06.100312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-03 00:53:06.100321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-03 00:53:06.100329 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:53:06.100337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-03 00:53:06.100349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-05-03 00:53:06.100358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-05-03 00:53:06.100369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-05-03 00:53:06.100463 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.100476 | orchestrator |
2025-05-03 00:53:06.100485 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2025-05-03 00:53:06.100494 | orchestrator | Saturday 03 May 2025 00:51:48 +0000 (0:00:01.696) 0:05:59.588 **********
2025-05-03 00:53:06.100506 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.100515 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.100522 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.100530 | orchestrator |
2025-05-03 00:53:06.100538 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2025-05-03 00:53:06.100546 | orchestrator | Saturday 03 May 2025 00:51:49 +0000 (0:00:00.756) 0:06:00.345 **********
2025-05-03 00:53:06.100554 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.100562 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.100570 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.100578 | orchestrator |
2025-05-03 00:53:06.100586 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2025-05-03 00:53:06.100594 | orchestrator | Saturday 03 May 2025 00:51:51 +0000 (0:00:02.027) 0:06:02.372 **********
2025-05-03 00:53:06.100602 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:53:06.100610 | orchestrator |
2025-05-03 00:53:06.100618 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2025-05-03 00:53:06.100625 | orchestrator | Saturday 03 May 2025 00:51:53 +0000 (0:00:01.873) 0:06:04.246 **********
2025-05-03 00:53:06.100634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-03 00:53:06.100643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-03 00:53:06.100656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-03 00:53:06.100670 | orchestrator |
2025-05-03 00:53:06.100678 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2025-05-03 00:53:06.100686 | orchestrator | Saturday 03 May 2025 00:51:56 +0000 (0:00:03.069) 0:06:07.315 **********
2025-05-03 00:53:06.100694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-03 00:53:06.100703 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.100711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-03 00:53:06.100720 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.100728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-03 00:53:06.100737 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.100745 | orchestrator |
2025-05-03 00:53:06.100753 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2025-05-03 00:53:06.100761 | orchestrator | Saturday 03 May 2025 00:51:56 +0000 (0:00:00.398) 0:06:07.713 **********
2025-05-03 00:53:06.100769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-05-03 00:53:06.100781 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.100790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-05-03 00:53:06.100798 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.100809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-05-03 00:53:06.100817 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.100825 | orchestrator |
2025-05-03 00:53:06.100833 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2025-05-03 00:53:06.100841 | orchestrator | Saturday 03 May 2025 00:51:57 +0000 (0:00:01.139) 0:06:08.853 **********
2025-05-03 00:53:06.100849 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.100857 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.100865 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.100873 | orchestrator |
2025-05-03 00:53:06.100881 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2025-05-03 00:53:06.100889 | orchestrator | Saturday 03 May 2025 00:51:58 +0000 (0:00:00.443) 0:06:09.297 **********
2025-05-03 00:53:06.100897 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.100905 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.100913 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.100921 | orchestrator |
2025-05-03 00:53:06.100929 | orchestrator | TASK [include_role : skyline] **************************************************
2025-05-03 00:53:06.100937 | orchestrator | Saturday 03 May 2025 00:51:59 +0000 (0:00:01.815) 0:06:11.112 **********
2025-05-03 00:53:06.100945 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:53:06.100953 | orchestrator |
2025-05-03 00:53:06.100961 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2025-05-03 00:53:06.100969 | orchestrator | Saturday 03 May 2025 00:52:01 +0000 (0:00:01.942) 0:06:13.055 **********
2025-05-03 00:53:06.100977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-03 00:53:06.100986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-03 00:53:06.100999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-03 00:53:06.101012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-03 00:53:06.101021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-03 00:53:06.101029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-03 00:53:06.101037 | orchestrator |
2025-05-03 00:53:06.101045 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2025-05-03 00:53:06.101058 | orchestrator | Saturday 03 May 2025 00:52:10 +0000 (0:00:08.115) 0:06:21.170 **********
2025-05-03 00:53:06.101066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-03 00:53:06.101078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-03 00:53:06.101086 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.101095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-03 00:53:06.101103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-03 00:53:06.101112 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.101124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-03 00:53:06.101139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-03 00:53:06.101148 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.101156 | orchestrator |
2025-05-03 00:53:06.101164 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2025-05-03 00:53:06.101172 | orchestrator | Saturday 03 May 2025 00:52:11 +0000 (0:00:01.258) 0:06:22.429 **********
2025-05-03 00:53:06.101180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-03 00:53:06.101188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-03 00:53:06.101196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-03 00:53:06.101204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-03 00:53:06.101212 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.101221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-03 00:53:06.101228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-03 00:53:06.101237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-03 00:53:06.101251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-03 00:53:06.101259 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.101268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-03 00:53:06.101275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-03 00:53:06.101284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-03 00:53:06.101292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-03 00:53:06.101300 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.101308 | orchestrator |
2025-05-03 00:53:06.101316 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-05-03 00:53:06.101324 | orchestrator | Saturday 03 May 2025 00:52:12 +0000 (0:00:01.443) 0:06:23.872 **********
2025-05-03 00:53:06.101332 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:53:06.101340 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:53:06.101349 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:53:06.101357 | orchestrator |
2025-05-03 00:53:06.101365 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-05-03 00:53:06.101376 | orchestrator | Saturday 03 May 2025 00:52:14 +0000 (0:00:01.495) 0:06:25.368 **********
2025-05-03 00:53:06.101384 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:53:06.101392 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:53:06.101400 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:53:06.101419 | orchestrator |
2025-05-03 00:53:06.101428 | orchestrator | TASK [include_role : swift] ****************************************************
2025-05-03 00:53:06.101436 | orchestrator | Saturday 03 May 2025 00:52:16 +0000 (0:00:02.546) 0:06:27.915 **********
2025-05-03 00:53:06.101444 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.101452 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.101463 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.101471 | orchestrator |
2025-05-03 00:53:06.101480 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-05-03 00:53:06.101487 | orchestrator | Saturday 03 May 2025 00:52:17 +0000 (0:00:00.559) 0:06:28.475 **********
2025-05-03 00:53:06.101495 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.101503 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.101511 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.101519 | orchestrator |
2025-05-03 00:53:06.101527 | orchestrator | TASK [include_role : trove] ****************************************************
2025-05-03 00:53:06.101535 | orchestrator | Saturday 03 May 2025 00:52:17 +0000 (0:00:00.309) 0:06:28.784 **********
2025-05-03 00:53:06.101543 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.101551 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.101559 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.101567 | orchestrator |
2025-05-03 00:53:06.101575 | orchestrator | TASK [include_role : venus] ****************************************************
2025-05-03 00:53:06.101582 | orchestrator | Saturday 03 May 2025 00:52:18 +0000 (0:00:00.610) 0:06:29.394 **********
2025-05-03 00:53:06.101590 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.101598 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.101611 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.101619 | orchestrator |
2025-05-03 00:53:06.101626 | orchestrator | TASK [include_role : watcher] **************************************************
2025-05-03 00:53:06.101634 | orchestrator | Saturday 03 May 2025 00:52:18 +0000 (0:00:00.539) 0:06:29.933 **********
2025-05-03 00:53:06.101642 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.101650 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.101658 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.101666 | orchestrator |
2025-05-03 00:53:06.101674 | orchestrator | TASK [include_role : zun] ******************************************************
2025-05-03 00:53:06.101682 | orchestrator | Saturday 03 May 2025 00:52:19 +0000 (0:00:00.301) 0:06:30.234 **********
2025-05-03 00:53:06.101690 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.101698 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.101706 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.101714 | orchestrator |
2025-05-03 00:53:06.101722 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-05-03 00:53:06.101730 | orchestrator | Saturday 03 May 2025 00:52:20 +0000 (0:00:01.042) 0:06:31.277 **********
2025-05-03 00:53:06.101737 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:53:06.101746 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:53:06.101754 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:53:06.101762 | orchestrator |
2025-05-03 00:53:06.101770 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-05-03 00:53:06.101778 | orchestrator | Saturday 03 May 2025 00:52:21 +0000 (0:00:00.938) 0:06:32.216 **********
2025-05-03 00:53:06.101786 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:53:06.101798 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:53:06.101807 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:53:06.101815 | orchestrator |
2025-05-03 00:53:06.101823 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-05-03 00:53:06.101831 | orchestrator | Saturday 03 May 2025 00:52:21 +0000 (0:00:00.340) 0:06:32.556 **********
2025-05-03 00:53:06.101839 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:53:06.101847 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:53:06.101855 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:53:06.101863 | orchestrator |
2025-05-03 00:53:06.101871 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-05-03 00:53:06.101879 | orchestrator | Saturday 03 May 2025 00:52:22 +0000 (0:00:01.249) 0:06:33.806 **********
2025-05-03 00:53:06.101887 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:53:06.101894 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:53:06.101902 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:53:06.101910 | orchestrator |
2025-05-03 00:53:06.101918 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-05-03 00:53:06.101926 | orchestrator | Saturday 03 May 2025 00:52:23 +0000 (0:00:01.286) 0:06:35.093 **********
2025-05-03 00:53:06.101934 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:53:06.101942 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:53:06.101949 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:53:06.101957 | orchestrator |
2025-05-03 00:53:06.101965 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-05-03 00:53:06.101973 | orchestrator | Saturday 03 May 2025 00:52:25 +0000 (0:00:01.195) 0:06:36.288 **********
2025-05-03 00:53:06.101981 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:53:06.101989 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:53:06.101997 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:53:06.102005 | orchestrator |
2025-05-03 00:53:06.102013 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-05-03 00:53:06.102060 | orchestrator | Saturday 03 May 2025 00:52:35 +0000 (0:00:10.063) 0:06:46.352 **********
2025-05-03 00:53:06.102068 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:53:06.102076 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:53:06.102084 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:53:06.102092 | orchestrator |
2025-05-03 00:53:06.102100 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-05-03 00:53:06.102113 | orchestrator | Saturday 03 May 2025 00:52:36 +0000 (0:00:01.051) 0:06:47.403 **********
2025-05-03 00:53:06.102121 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:53:06.102129 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:53:06.102137 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:53:06.102145 | orchestrator |
2025-05-03 00:53:06.102153 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-05-03 00:53:06.102161 | orchestrator | Saturday 03 May 2025 00:52:45 +0000 (0:00:09.474) 0:06:56.877 **********
2025-05-03 00:53:06.102169 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:53:06.102177 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:53:06.102185 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:53:06.102193 | orchestrator |
2025-05-03 00:53:06.102201 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-05-03 00:53:06.102213 | orchestrator | Saturday 03 May 2025 00:52:47 +0000 (0:00:01.741) 0:06:58.619 **********
2025-05-03 00:53:06.102221 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:53:06.102230 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:53:06.102238 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:53:06.102246 | orchestrator |
2025-05-03 00:53:06.102257 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-05-03 00:53:06.102266 | orchestrator | Saturday 03 May 2025 00:52:57 +0000 (0:00:10.395) 0:07:09.015 **********
2025-05-03 00:53:06.102274 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.102282 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.102289 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.102297 | orchestrator |
2025-05-03 00:53:06.102306 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-05-03 00:53:06.102313 | orchestrator | Saturday 03 May 2025 00:52:58 +0000 (0:00:00.668) 0:07:09.683 **********
2025-05-03 00:53:06.102321 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.102330 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.102338 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.102346 | orchestrator |
2025-05-03 00:53:06.102354 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-05-03 00:53:06.102362 | orchestrator | Saturday 03 May 2025 00:52:59 +0000 (0:00:00.663) 0:07:10.347 **********
2025-05-03 00:53:06.102369 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.102377 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.102385 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.102393 | orchestrator |
2025-05-03 00:53:06.102401 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-05-03 00:53:06.102447 | orchestrator | Saturday 03 May 2025 00:52:59 +0000 (0:00:00.366) 0:07:10.714 **********
2025-05-03 00:53:06.102457 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.102465 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.102473 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.102481 | orchestrator |
2025-05-03 00:53:06.102489 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-05-03 00:53:06.102497 | orchestrator | Saturday 03 May 2025 00:53:00 +0000 (0:00:00.636) 0:07:11.350 **********
2025-05-03 00:53:06.102505 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.102513 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.102521 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.102529 | orchestrator |
2025-05-03 00:53:06.102536 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-05-03 00:53:06.102544 | orchestrator | Saturday 03 May 2025 00:53:00 +0000 (0:00:00.622) 0:07:11.972 **********
2025-05-03 00:53:06.102552 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:53:06.102560 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:53:06.102568 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:53:06.102577 | orchestrator |
2025-05-03 00:53:06.102585 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-05-03 00:53:06.102601 | orchestrator | Saturday 03 May 2025 00:53:01 +0000 (0:00:00.601) 0:07:12.573 **********
2025-05-03 00:53:06.102609 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:53:06.102617 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:53:06.102625 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:53:06.102637 | orchestrator |
2025-05-03 00:53:06.102645 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-05-03 00:53:06.102653 | orchestrator | Saturday 03 May 2025 00:53:02 +0000 (0:00:00.846) 0:07:13.420 **********
2025-05-03 00:53:06.102661 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:53:06.102669 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:53:06.102677 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:53:06.102689 | orchestrator |
2025-05-03 00:53:06.102697 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 00:53:06.102706 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0
2025-05-03 00:53:06.102715 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0
2025-05-03 00:53:06.102722 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0
2025-05-03 00:53:06.102729 | orchestrator |
2025-05-03 00:53:06.102736 | orchestrator |
2025-05-03 00:53:06.102743 | orchestrator | TASKS RECAP ********************************************************************
2025-05-03 00:53:06.102750 | orchestrator | Saturday 03 May 2025 00:53:03 +0000 (0:00:01.134) 0:07:14.554 **********
2025-05-03 00:53:06.102757 | orchestrator | ===============================================================================
2025-05-03 00:53:06.102764 | orchestrator | loadbalancer : Start backup keepalived container ----------------------- 10.40s
2025-05-03 00:53:06.102771 | orchestrator | loadbalancer : Start backup haproxy
container -------------------------- 10.06s 2025-05-03 00:53:06.102778 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 9.47s 2025-05-03 00:53:06.102785 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 8.12s 2025-05-03 00:53:06.102792 | orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 7.85s 2025-05-03 00:53:06.102799 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.37s 2025-05-03 00:53:06.102806 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 6.29s 2025-05-03 00:53:06.102813 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.76s 2025-05-03 00:53:06.102820 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.36s 2025-05-03 00:53:06.102827 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.08s 2025-05-03 00:53:06.102834 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.97s 2025-05-03 00:53:06.102844 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.85s 2025-05-03 00:53:06.102852 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.80s 2025-05-03 00:53:06.102859 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.79s 2025-05-03 00:53:06.102869 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 4.77s 2025-05-03 00:53:09.121057 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.69s 2025-05-03 00:53:09.121175 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 4.50s 2025-05-03 00:53:09.121194 | orchestrator | haproxy-config : Copying over manila haproxy 
config --------------------- 4.41s 2025-05-03 00:53:09.121209 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.41s 2025-05-03 00:53:09.121224 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.33s 2025-05-03 00:53:09.121239 | orchestrator | 2025-05-03 00:53:06 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:53:09.121283 | orchestrator | 2025-05-03 00:53:06 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:53:09.121299 | orchestrator | 2025-05-03 00:53:06 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:53:09.121330 | orchestrator | 2025-05-03 00:53:09 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:53:09.124223 | orchestrator | 2025-05-03 00:53:09 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:53:09.124554 | orchestrator | 2025-05-03 00:53:09 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:53:09.125068 | orchestrator | 2025-05-03 00:53:09 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:53:12.161586 | orchestrator | 2025-05-03 00:53:09 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:53:12.161722 | orchestrator | 2025-05-03 00:53:12 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:53:12.171658 | orchestrator | 2025-05-03 00:53:12 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:53:12.171724 | orchestrator | 2025-05-03 00:53:12 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:53:12.172492 | orchestrator | 2025-05-03 00:53:12 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:53:15.223721 | orchestrator | 2025-05-03 00:53:12 | INFO  | Wait 1 second(s) until the next check 2025-05-03 
00:53:15.223893 | orchestrator | 2025-05-03 00:53:15 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:53:18.253875 | orchestrator | 2025-05-03 00:53:15 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:53:18.254000 | orchestrator | 2025-05-03 00:53:15 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:53:18.254009 | orchestrator | 2025-05-03 00:53:15 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:53:18.254063 | orchestrator | 2025-05-03 00:53:15 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:53:18.254085 | orchestrator | 2025-05-03 00:53:18 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:53:18.256834 | orchestrator | 2025-05-03 00:53:18 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:53:18.259305 | orchestrator | 2025-05-03 00:53:18 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:53:18.261812 | orchestrator | 2025-05-03 00:53:18 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:53:21.302283 | orchestrator | 2025-05-03 00:53:18 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:53:21.302464 | orchestrator | 2025-05-03 00:53:21 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:53:21.302654 | orchestrator | 2025-05-03 00:53:21 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:53:21.303422 | orchestrator | 2025-05-03 00:53:21 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:53:21.304071 | orchestrator | 2025-05-03 00:53:21 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:53:24.331592 | orchestrator | 2025-05-03 00:53:21 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:53:24.331707 | orchestrator 
| 2025-05-03 00:53:24 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:53:27.355680 | orchestrator | 2025-05-03 00:53:24 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:53:27.355785 | orchestrator | 2025-05-03 00:53:24 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:53:27.355804 | orchestrator | 2025-05-03 00:53:24 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:53:27.355819 | orchestrator | 2025-05-03 00:53:24 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:53:27.355850 | orchestrator | 2025-05-03 00:53:27 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:53:27.356073 | orchestrator | 2025-05-03 00:53:27 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:53:27.356105 | orchestrator | 2025-05-03 00:53:27 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:53:27.356787 | orchestrator | 2025-05-03 00:53:27 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:53:30.382159 | orchestrator | 2025-05-03 00:53:27 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:53:30.382297 | orchestrator | 2025-05-03 00:53:30 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:53:30.383171 | orchestrator | 2025-05-03 00:53:30 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:53:30.383282 | orchestrator | 2025-05-03 00:53:30 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:53:30.383954 | orchestrator | 2025-05-03 00:53:30 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:53:33.436506 | orchestrator | 2025-05-03 00:53:30 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:53:33.436628 | orchestrator | 2025-05-03 00:53:33 | INFO  | 
Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:53:33.438618 | orchestrator | 2025-05-03 00:53:33 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:53:33.440603 | orchestrator | 2025-05-03 00:53:33 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:53:33.442236 | orchestrator | 2025-05-03 00:53:33 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:53:33.442532 | orchestrator | 2025-05-03 00:53:33 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:53:36.504094 | orchestrator | 2025-05-03 00:53:36 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:53:36.506151 | orchestrator | 2025-05-03 00:53:36 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:53:36.506500 | orchestrator | 2025-05-03 00:53:36 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:53:36.508354 | orchestrator | 2025-05-03 00:53:36 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:53:39.568823 | orchestrator | 2025-05-03 00:53:36 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:53:39.568960 | orchestrator | 2025-05-03 00:53:39 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:53:39.570555 | orchestrator | 2025-05-03 00:53:39 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:53:39.572160 | orchestrator | 2025-05-03 00:53:39 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:53:39.573864 | orchestrator | 2025-05-03 00:53:39 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:53:42.647956 | orchestrator | 2025-05-03 00:53:39 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:53:42.648173 | orchestrator | 2025-05-03 00:53:42 | INFO  | Task 
d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:53:42.650854 | orchestrator | 2025-05-03 00:53:42 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:53:42.653156 | orchestrator | 2025-05-03 00:53:42 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:53:42.655310 | orchestrator | 2025-05-03 00:53:42 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:53:42.655478 | orchestrator | 2025-05-03 00:53:42 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:53:45.714869 | orchestrator | 2025-05-03 00:53:45 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:53:45.715296 | orchestrator | 2025-05-03 00:53:45 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:53:45.716166 | orchestrator | 2025-05-03 00:53:45 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:53:45.716962 | orchestrator | 2025-05-03 00:53:45 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:53:45.717212 | orchestrator | 2025-05-03 00:53:45 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:53:48.771473 | orchestrator | 2025-05-03 00:53:48 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:53:48.772807 | orchestrator | 2025-05-03 00:53:48 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:53:48.774586 | orchestrator | 2025-05-03 00:53:48 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:53:48.776108 | orchestrator | 2025-05-03 00:53:48 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:53:48.776246 | orchestrator | 2025-05-03 00:53:48 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:53:51.828761 | orchestrator | 2025-05-03 00:53:51 | INFO  | Task 
d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:53:51.830614 | orchestrator | 2025-05-03 00:53:51 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:53:51.833095 | orchestrator | 2025-05-03 00:53:51 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:53:51.835464 | orchestrator | 2025-05-03 00:53:51 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:53:51.835977 | orchestrator | 2025-05-03 00:53:51 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:53:54.897437 | orchestrator | 2025-05-03 00:53:54 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:53:54.899234 | orchestrator | 2025-05-03 00:53:54 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:53:54.900961 | orchestrator | 2025-05-03 00:53:54 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:53:54.902687 | orchestrator | 2025-05-03 00:53:54 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:53:57.947051 | orchestrator | 2025-05-03 00:53:54 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:53:57.947223 | orchestrator | 2025-05-03 00:53:57 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:53:57.948568 | orchestrator | 2025-05-03 00:53:57 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:53:57.950413 | orchestrator | 2025-05-03 00:53:57 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:53:57.952027 | orchestrator | 2025-05-03 00:53:57 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:53:57.952289 | orchestrator | 2025-05-03 00:53:57 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:54:00.992507 | orchestrator | 2025-05-03 00:54:00 | INFO  | Task 
d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:54:00.993654 | orchestrator | 2025-05-03 00:54:00 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:54:00.995046 | orchestrator | 2025-05-03 00:54:00 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:54:00.996931 | orchestrator | 2025-05-03 00:54:00 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:54:04.042776 | orchestrator | 2025-05-03 00:54:00 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:54:04.042999 | orchestrator | 2025-05-03 00:54:04 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:54:04.044956 | orchestrator | 2025-05-03 00:54:04 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:54:04.044999 | orchestrator | 2025-05-03 00:54:04 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:54:04.045023 | orchestrator | 2025-05-03 00:54:04 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:54:07.117927 | orchestrator | 2025-05-03 00:54:04 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:54:07.118135 | orchestrator | 2025-05-03 00:54:07 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:54:07.120118 | orchestrator | 2025-05-03 00:54:07 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:54:07.122423 | orchestrator | 2025-05-03 00:54:07 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:54:07.124301 | orchestrator | 2025-05-03 00:54:07 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:54:07.124857 | orchestrator | 2025-05-03 00:54:07 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:54:10.169101 | orchestrator | 2025-05-03 00:54:10 | INFO  | Task 
d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:54:10.170668 | orchestrator | 2025-05-03 00:54:10 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:54:10.174897 | orchestrator | 2025-05-03 00:54:10 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:54:13.210867 | orchestrator | 2025-05-03 00:54:10 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:54:13.210994 | orchestrator | 2025-05-03 00:54:10 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:54:13.211031 | orchestrator | 2025-05-03 00:54:13 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:54:13.211156 | orchestrator | 2025-05-03 00:54:13 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:54:13.211930 | orchestrator | 2025-05-03 00:54:13 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:54:13.214532 | orchestrator | 2025-05-03 00:54:13 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:54:16.255684 | orchestrator | 2025-05-03 00:54:13 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:54:16.255855 | orchestrator | 2025-05-03 00:54:16 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:54:16.256322 | orchestrator | 2025-05-03 00:54:16 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:54:16.265235 | orchestrator | 2025-05-03 00:54:16 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:54:16.266872 | orchestrator | 2025-05-03 00:54:16 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:54:19.332451 | orchestrator | 2025-05-03 00:54:16 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:54:19.332607 | orchestrator | 2025-05-03 00:54:19 | INFO  | Task 
d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:54:19.333566 | orchestrator | 2025-05-03 00:54:19 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:54:19.333625 | orchestrator | 2025-05-03 00:54:19 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:54:19.335476 | orchestrator | 2025-05-03 00:54:19 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:54:22.399219 | orchestrator | 2025-05-03 00:54:19 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:54:22.399431 | orchestrator | 2025-05-03 00:54:22 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:54:22.400731 | orchestrator | 2025-05-03 00:54:22 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:54:22.401466 | orchestrator | 2025-05-03 00:54:22 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:54:22.402704 | orchestrator | 2025-05-03 00:54:22 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:54:25.453738 | orchestrator | 2025-05-03 00:54:22 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:54:25.453881 | orchestrator | 2025-05-03 00:54:25 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:54:25.455277 | orchestrator | 2025-05-03 00:54:25 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:54:25.455320 | orchestrator | 2025-05-03 00:54:25 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:54:25.456165 | orchestrator | 2025-05-03 00:54:25 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:54:28.486508 | orchestrator | 2025-05-03 00:54:25 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:54:28.486627 | orchestrator | 2025-05-03 00:54:28 | INFO  | Task 
d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:54:28.487915 | orchestrator | 2025-05-03 00:54:28 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:54:28.489871 | orchestrator | 2025-05-03 00:54:28 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:54:28.491002 | orchestrator | 2025-05-03 00:54:28 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:54:28.491197 | orchestrator | 2025-05-03 00:54:28 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:54:31.539104 | orchestrator | 2025-05-03 00:54:31 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:54:31.541024 | orchestrator | 2025-05-03 00:54:31 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:54:31.543107 | orchestrator | 2025-05-03 00:54:31 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:54:31.545007 | orchestrator | 2025-05-03 00:54:31 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:54:34.592068 | orchestrator | 2025-05-03 00:54:31 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:54:34.592233 | orchestrator | 2025-05-03 00:54:34 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:54:34.594520 | orchestrator | 2025-05-03 00:54:34 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:54:34.595718 | orchestrator | 2025-05-03 00:54:34 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:54:34.596699 | orchestrator | 2025-05-03 00:54:34 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:54:37.652956 | orchestrator | 2025-05-03 00:54:34 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:54:37.653108 | orchestrator | 2025-05-03 00:54:37 | INFO  | Task 
d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:54:37.653395 | orchestrator | 2025-05-03 00:54:37 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:54:37.656172 | orchestrator | 2025-05-03 00:54:37 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:54:37.658888 | orchestrator | 2025-05-03 00:54:37 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:54:40.706896 | orchestrator | 2025-05-03 00:54:37 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:54:40.707010 | orchestrator | 2025-05-03 00:54:40 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:54:40.708615 | orchestrator | 2025-05-03 00:54:40 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:54:40.710631 | orchestrator | 2025-05-03 00:54:40 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:54:40.712506 | orchestrator | 2025-05-03 00:54:40 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:54:40.712741 | orchestrator | 2025-05-03 00:54:40 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:54:43.771385 | orchestrator | 2025-05-03 00:54:43 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:54:43.773601 | orchestrator | 2025-05-03 00:54:43 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:54:43.776224 | orchestrator | 2025-05-03 00:54:43 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:54:43.777704 | orchestrator | 2025-05-03 00:54:43 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:54:46.832527 | orchestrator | 2025-05-03 00:54:43 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:54:46.832662 | orchestrator | 2025-05-03 00:54:46 | INFO  | Task 
d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:54:46.834761 | orchestrator | 2025-05-03 00:54:46 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:54:46.836375 | orchestrator | 2025-05-03 00:54:46 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:54:46.837705 | orchestrator | 2025-05-03 00:54:46 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:54:49.881208 | orchestrator | 2025-05-03 00:54:46 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:54:49.881412 | orchestrator | 2025-05-03 00:54:49 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:54:49.883278 | orchestrator | 2025-05-03 00:54:49 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:54:49.885088 | orchestrator | 2025-05-03 00:54:49 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:54:49.887113 | orchestrator | 2025-05-03 00:54:49 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:54:52.936822 | orchestrator | 2025-05-03 00:54:49 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:54:52.936991 | orchestrator | 2025-05-03 00:54:52 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED 2025-05-03 00:54:52.938384 | orchestrator | 2025-05-03 00:54:52 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:54:52.939572 | orchestrator | 2025-05-03 00:54:52 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:54:52.942789 | orchestrator | 2025-05-03 00:54:52 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:54:55.999133 | orchestrator | 2025-05-03 00:54:52 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:54:55.999273 | orchestrator | 2025-05-03 00:54:55 | INFO  | Task 
d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state STARTED
2025-05-03 00:54:56.002365 | orchestrator | 2025-05-03 00:54:55 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED
2025-05-03 00:54:56.004119 | orchestrator | 2025-05-03 00:54:56 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:54:56.006135 | orchestrator | 2025-05-03 00:54:56 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED
2025-05-03 00:54:56.006439 | orchestrator | 2025-05-03 00:54:56 | INFO  | Wait 1 second(s) until the next check
[identical polling cycles for the same four tasks repeat every ~3 seconds until 2025-05-03 00:55:26]
2025-05-03 00:55:26.571084 | orchestrator |
2025-05-03 00:55:26.571109 | orchestrator |
2025-05-03 00:55:26.571124 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-03 00:55:26.571138 | orchestrator |
2025-05-03 00:55:26.571153 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-03 00:55:26.571167 | orchestrator | Saturday 03 May 2025 00:53:07 +0000 (0:00:00.248) 0:00:00.248 **********
2025-05-03 00:55:26.571181 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:55:26.571197 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:55:26.571211 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:55:26.571225 | orchestrator |
2025-05-03 00:55:26.571240 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-03 00:55:26.571254 | orchestrator | Saturday 03 May 2025 00:53:07 +0000 (0:00:00.346) 0:00:00.594 **********
2025-05-03 00:55:26.571268 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-05-03 00:55:26.571283 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-05-03 00:55:26.571329 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-05-03 00:55:26.571345 | orchestrator |
2025-05-03 00:55:26.571359 | orchestrator | PLAY [Apply role opensearch]
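The wait loop recorded above (check each task's state, log it, sleep, repeat until nothing is STARTED) can be sketched as follows. This is an illustrative reconstruction only: `wait_for_tasks` and the `get_state` callback are assumptions for the sketch, not the actual osism client internals that produced the log.

```python
import time


def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=300.0):
    """Poll get_state(task_id) until no task is in state STARTED.

    Hypothetical sketch of the polling pattern in the log: one state
    query per pending task per cycle, then a fixed sleep between cycles.
    """
    deadline = time.monotonic() + timeout
    pending = list(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {pending}")
        # Query each pending task once per cycle and log its state,
        # mirroring the "Task <uuid> is in state STARTED" lines above.
        current = {task_id: get_state(task_id) for task_id in pending}
        for task_id, state in current.items():
            print(f"INFO  | Task {task_id} is in state {state}")
        pending = [t for t in pending if current[t] == "STARTED"]
        if pending:
            print(f"INFO  | Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return True
```

In the log, four task UUIDs are polled together and each full cycle takes roughly three seconds, so the one-second wait plus per-task query latency accounts for the spacing of the timestamps.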
*************************************************** 2025-05-03 00:55:26.571374 | orchestrator | 2025-05-03 00:55:26.571388 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-03 00:55:26.571402 | orchestrator | Saturday 03 May 2025 00:53:07 +0000 (0:00:00.228) 0:00:00.823 ********** 2025-05-03 00:55:26.571416 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:55:26.571433 | orchestrator | 2025-05-03 00:55:26.571449 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-05-03 00:55:26.571465 | orchestrator | Saturday 03 May 2025 00:53:08 +0000 (0:00:00.501) 0:00:01.324 ********** 2025-05-03 00:55:26.571481 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-03 00:55:26.571498 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-03 00:55:26.571514 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-03 00:55:26.571529 | orchestrator | 2025-05-03 00:55:26.571545 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-05-03 00:55:26.571562 | orchestrator | Saturday 03 May 2025 00:53:08 +0000 (0:00:00.653) 0:00:01.977 ********** 2025-05-03 00:55:26.571582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-03 00:55:26.571600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-03 00:55:26.571638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-03 00:55:26.571655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-03 00:55:26.571672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-03 00:55:26.571688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-03 00:55:26.571709 | orchestrator | 2025-05-03 00:55:26.571724 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-03 00:55:26.571738 | orchestrator | Saturday 03 May 2025 00:53:10 +0000 (0:00:01.245) 0:00:03.223 ********** 2025-05-03 00:55:26.571752 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:55:26.571766 | orchestrator | 2025-05-03 00:55:26.571780 | orchestrator | TASK [service-cert-copy : opensearch | Copying over 
extra CA certificates] ***** 2025-05-03 00:55:26.571794 | orchestrator | Saturday 03 May 2025 00:53:10 +0000 (0:00:00.586) 0:00:03.810 ********** 2025-05-03 00:55:26.571819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-03 00:55:26.571897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-03 00:55:26.571915 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-03 00:55:26.571931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-03 00:55:26.571964 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-03 00:55:26.571980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-03 00:55:26.571995 | orchestrator | 2025-05-03 00:55:26.572010 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-05-03 00:55:26.572024 | orchestrator | Saturday 03 May 2025 00:53:13 +0000 (0:00:02.781) 0:00:06.591 ********** 2025-05-03 00:55:26.572039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-03 00:55:26.572054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-03 00:55:26.572075 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:55:26.572098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-03 00:55:26.572113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-03 00:55:26.572128 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:55:26.572143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-03 00:55:26.572157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-03 00:55:26.572179 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:55:26.572194 | orchestrator | 2025-05-03 00:55:26.572208 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-05-03 00:55:26.572237 | orchestrator | Saturday 03 May 2025 00:53:14 +0000 (0:00:00.941) 0:00:07.533 ********** 2025-05-03 00:55:26.572259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-03 00:55:26.572275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-03 00:55:26.572290 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:55:26.572360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-03 00:55:26.572383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': 
{'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-03 00:55:26.572399 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:55:26.572420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-03 00:55:26.572436 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-03 00:55:26.572452 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:55:26.572466 | orchestrator | 2025-05-03 00:55:26.572480 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-05-03 00:55:26.572494 | orchestrator | Saturday 03 May 2025 00:53:15 +0000 (0:00:01.383) 0:00:08.916 ********** 2025-05-03 00:55:26.572508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-03 00:55:26.572530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-03 00:55:26.572544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-03 00:55:26.572567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-03 00:55:26.572583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-03 00:55:26.572605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-03 00:55:26.572620 | orchestrator | 2025-05-03 00:55:26.572635 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-05-03 00:55:26.572649 | orchestrator | Saturday 03 May 2025 00:53:18 +0000 (0:00:02.841) 0:00:11.757 ********** 2025-05-03 00:55:26.572699 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:55:26.572716 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:55:26.572730 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:55:26.572744 | orchestrator | 2025-05-03 00:55:26.572758 | orchestrator | TASK [opensearch : Copying over 
opensearch-dashboards config file] ************* 2025-05-03 00:55:26.572772 | orchestrator | Saturday 03 May 2025 00:53:22 +0000 (0:00:03.602) 0:00:15.360 ********** 2025-05-03 00:55:26.572786 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:55:26.572800 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:55:26.572814 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:55:26.572828 | orchestrator | 2025-05-03 00:55:26.572842 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-05-03 00:55:26.572856 | orchestrator | Saturday 03 May 2025 00:53:23 +0000 (0:00:01.584) 0:00:16.945 ********** 2025-05-03 00:55:26.572877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-03 00:55:26.573054 | orchestrator | 2025-05-03 00:55:26 | INFO  | Task d56f1863-8b73-4dc3-adb5-7df9e68652b3 is in state SUCCESS 2025-05-03 00:55:26.573153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-03 00:55:26.573185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-03 00:55:26.573202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-03 00:55:26.573226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-03 00:55:26.573241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-03 00:55:26.573263 | orchestrator | 2025-05-03 00:55:26.573277 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-03 00:55:26.573291 | orchestrator | Saturday 03 May 2025 00:53:26 +0000 (0:00:02.367) 0:00:19.312 ********** 2025-05-03 00:55:26.573389 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:55:26.573406 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:55:26.573420 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:55:26.573434 | orchestrator | 2025-05-03 00:55:26.573448 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-03 00:55:26.573462 | orchestrator | Saturday 03 May 2025 00:53:26 +0000 (0:00:00.313) 0:00:19.626 ********** 2025-05-03 00:55:26.573476 | orchestrator | 2025-05-03 00:55:26.573490 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-03 00:55:26.573504 | orchestrator | Saturday 03 May 2025 00:53:26 +0000 (0:00:00.177) 0:00:19.804 ********** 2025-05-03 00:55:26.573518 | orchestrator | 2025-05-03 00:55:26.573532 | orchestrator | TASK [opensearch : Flush 
handlers] ********************************************* 2025-05-03 00:55:26.573546 | orchestrator | Saturday 03 May 2025 00:53:26 +0000 (0:00:00.050) 0:00:19.854 ********** 2025-05-03 00:55:26.573560 | orchestrator | 2025-05-03 00:55:26.573573 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-05-03 00:55:26.573587 | orchestrator | Saturday 03 May 2025 00:53:26 +0000 (0:00:00.084) 0:00:19.939 ********** 2025-05-03 00:55:26.573601 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:55:26.573615 | orchestrator | 2025-05-03 00:55:26.573629 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-05-03 00:55:26.573643 | orchestrator | Saturday 03 May 2025 00:53:27 +0000 (0:00:00.240) 0:00:20.179 ********** 2025-05-03 00:55:26.573657 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:55:26.573673 | orchestrator | 2025-05-03 00:55:26.573689 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-05-03 00:55:26.573705 | orchestrator | Saturday 03 May 2025 00:53:27 +0000 (0:00:00.465) 0:00:20.645 ********** 2025-05-03 00:55:26.573721 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:55:26.573738 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:55:26.573753 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:55:26.573770 | orchestrator | 2025-05-03 00:55:26.573785 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-05-03 00:55:26.573801 | orchestrator | Saturday 03 May 2025 00:54:08 +0000 (0:00:41.341) 0:01:01.986 ********** 2025-05-03 00:55:26.573817 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:55:26.573833 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:55:26.573850 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:55:26.573866 | orchestrator | 2025-05-03 00:55:26.573881 | orchestrator | TASK [opensearch : 
include_tasks] ********************************************** 2025-05-03 00:55:26.573898 | orchestrator | Saturday 03 May 2025 00:55:12 +0000 (0:01:03.767) 0:02:05.754 ********** 2025-05-03 00:55:26.573913 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:55:26.573929 | orchestrator | 2025-05-03 00:55:26.573945 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-05-03 00:55:26.573962 | orchestrator | Saturday 03 May 2025 00:55:13 +0000 (0:00:00.718) 0:02:06.472 ********** 2025-05-03 00:55:26.573978 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:55:26.573993 | orchestrator | 2025-05-03 00:55:26.574007 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-05-03 00:55:26.574071 | orchestrator | Saturday 03 May 2025 00:55:16 +0000 (0:00:02.711) 0:02:09.184 ********** 2025-05-03 00:55:26.574086 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:55:26.574100 | orchestrator | 2025-05-03 00:55:26.574123 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-05-03 00:55:26.574194 | orchestrator | Saturday 03 May 2025 00:55:18 +0000 (0:00:02.488) 0:02:11.673 ********** 2025-05-03 00:55:26.574211 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:55:26.574225 | orchestrator | 2025-05-03 00:55:26.574239 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-05-03 00:55:26.574254 | orchestrator | Saturday 03 May 2025 00:55:21 +0000 (0:00:02.916) 0:02:14.589 ********** 2025-05-03 00:55:26.574268 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:55:26.574282 | orchestrator | 2025-05-03 00:55:26.574331 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-03 00:55:26.575844 | orchestrator | testbed-node-0 : ok=18  changed=11  
unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-03 00:55:26.575874 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-03 00:55:26.575887 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-03 00:55:26.575899 | orchestrator | 2025-05-03 00:55:26.575911 | orchestrator | 2025-05-03 00:55:26.575923 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-03 00:55:26.575936 | orchestrator | Saturday 03 May 2025 00:55:24 +0000 (0:00:03.124) 0:02:17.713 ********** 2025-05-03 00:55:26.575948 | orchestrator | =============================================================================== 2025-05-03 00:55:26.575961 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 63.77s 2025-05-03 00:55:26.575973 | orchestrator | opensearch : Restart opensearch container ------------------------------ 41.34s 2025-05-03 00:55:26.575985 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.60s 2025-05-03 00:55:26.575998 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 3.12s 2025-05-03 00:55:26.576010 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.92s 2025-05-03 00:55:26.576022 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.84s 2025-05-03 00:55:26.576034 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.78s 2025-05-03 00:55:26.576047 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.71s 2025-05-03 00:55:26.576059 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.49s 2025-05-03 00:55:26.576071 | orchestrator | opensearch : Check opensearch containers 
-------------------------------- 2.37s 2025-05-03 00:55:26.576083 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.58s 2025-05-03 00:55:26.576096 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.38s 2025-05-03 00:55:26.576108 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.25s 2025-05-03 00:55:26.576120 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.94s 2025-05-03 00:55:26.576133 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.72s 2025-05-03 00:55:26.576145 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.65s 2025-05-03 00:55:26.576157 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.59s 2025-05-03 00:55:26.576170 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.50s 2025-05-03 00:55:26.576182 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.47s 2025-05-03 00:55:26.576194 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2025-05-03 00:55:26.576206 | orchestrator | 2025-05-03 00:55:26 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:55:26.576219 | orchestrator | 2025-05-03 00:55:26 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:55:26.576254 | orchestrator | 2025-05-03 00:55:26 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:55:29.627180 | orchestrator | 2025-05-03 00:55:26 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:55:29.627368 | orchestrator | 2025-05-03 00:55:29 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:55:29.629103 | orchestrator | 2025-05-03 
00:55:29 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:55:29.630971 | orchestrator | 2025-05-03 00:55:29 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:55:29.631460 | orchestrator | 2025-05-03 00:55:29 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:55:32.682605 | orchestrator | 2025-05-03 00:55:32 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:55:32.684658 | orchestrator | 2025-05-03 00:55:32 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:55:35.730764 | orchestrator | 2025-05-03 00:55:32 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:55:35.730914 | orchestrator | 2025-05-03 00:55:32 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:55:35.730951 | orchestrator | 2025-05-03 00:55:35 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:55:35.733186 | orchestrator | 2025-05-03 00:55:35 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:55:35.735372 | orchestrator | 2025-05-03 00:55:35 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:55:35.735729 | orchestrator | 2025-05-03 00:55:35 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:55:38.791224 | orchestrator | 2025-05-03 00:55:38 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:55:38.792839 | orchestrator | 2025-05-03 00:55:38 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:55:38.794931 | orchestrator | 2025-05-03 00:55:38 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:55:38.795054 | orchestrator | 2025-05-03 00:55:38 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:55:41.844845 | orchestrator | 2025-05-03 00:55:41 | INFO  | Task 
c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:55:41.845508 | orchestrator | 2025-05-03 00:55:41 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:55:41.846583 | orchestrator | 2025-05-03 00:55:41 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:55:44.893464 | orchestrator | 2025-05-03 00:55:41 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:55:44.893630 | orchestrator | 2025-05-03 00:55:44 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:55:44.894473 | orchestrator | 2025-05-03 00:55:44 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:55:44.896171 | orchestrator | 2025-05-03 00:55:44 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:55:47.943357 | orchestrator | 2025-05-03 00:55:44 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:55:47.943506 | orchestrator | 2025-05-03 00:55:47 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:55:47.945462 | orchestrator | 2025-05-03 00:55:47 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:55:47.947212 | orchestrator | 2025-05-03 00:55:47 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:55:50.992555 | orchestrator | 2025-05-03 00:55:47 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:55:50.992683 | orchestrator | 2025-05-03 00:55:50 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:55:50.993845 | orchestrator | 2025-05-03 00:55:50 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:55:50.995535 | orchestrator | 2025-05-03 00:55:50 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:55:54.046258 | orchestrator | 2025-05-03 00:55:50 | INFO  | Wait 1 second(s) until the next 
check 2025-05-03 00:55:54.046417 | orchestrator | 2025-05-03 00:55:54 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:55:54.047951 | orchestrator | 2025-05-03 00:55:54 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:55:54.049383 | orchestrator | 2025-05-03 00:55:54 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:55:54.049677 | orchestrator | 2025-05-03 00:55:54 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:55:57.110684 | orchestrator | 2025-05-03 00:55:57 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:55:57.118868 | orchestrator | 2025-05-03 00:55:57 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:55:57.122725 | orchestrator | 2025-05-03 00:55:57 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:56:00.176721 | orchestrator | 2025-05-03 00:55:57 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:56:00.176860 | orchestrator | 2025-05-03 00:56:00 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:56:00.178477 | orchestrator | 2025-05-03 00:56:00 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:56:00.180799 | orchestrator | 2025-05-03 00:56:00 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:56:00.181659 | orchestrator | 2025-05-03 00:56:00 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:56:03.239572 | orchestrator | 2025-05-03 00:56:03 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:56:03.241177 | orchestrator | 2025-05-03 00:56:03 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:56:03.242512 | orchestrator | 2025-05-03 00:56:03 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 
00:56:06.298425 | orchestrator | 2025-05-03 00:56:03 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:56:06.298595 | orchestrator | 2025-05-03 00:56:06 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:56:06.300202 | orchestrator | 2025-05-03 00:56:06 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:56:06.302422 | orchestrator | 2025-05-03 00:56:06 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:56:09.361943 | orchestrator | 2025-05-03 00:56:06 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:56:09.362147 | orchestrator | 2025-05-03 00:56:09 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:56:09.363440 | orchestrator | 2025-05-03 00:56:09 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:56:09.364995 | orchestrator | 2025-05-03 00:56:09 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:56:12.413552 | orchestrator | 2025-05-03 00:56:09 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:56:12.413740 | orchestrator | 2025-05-03 00:56:12 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:56:12.414981 | orchestrator | 2025-05-03 00:56:12 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:56:12.417330 | orchestrator | 2025-05-03 00:56:12 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:56:12.417947 | orchestrator | 2025-05-03 00:56:12 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:56:15.470208 | orchestrator | 2025-05-03 00:56:15 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:56:15.471582 | orchestrator | 2025-05-03 00:56:15 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:56:15.472769 | orchestrator | 2025-05-03 00:56:15 | 
INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:56:18.519545 | orchestrator | 2025-05-03 00:56:15 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:56:18.519683 | orchestrator | 2025-05-03 00:56:18 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state STARTED 2025-05-03 00:56:18.520550 | orchestrator | 2025-05-03 00:56:18 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:56:18.522134 | orchestrator | 2025-05-03 00:56:18 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED 2025-05-03 00:56:21.576793 | orchestrator | 2025-05-03 00:56:18 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:56:21.576963 | orchestrator | 2025-05-03 00:56:21 | INFO  | Task d2b616f1-9e66-420b-ad37-3fab543d8765 is in state STARTED 2025-05-03 00:56:21.586872 | orchestrator | 2025-05-03 00:56:21.587025 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-03 00:56:21.587059 | orchestrator | 2025-05-03 00:56:21.587591 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-05-03 00:56:21.587631 | orchestrator | 2025-05-03 00:56:21.587658 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-03 00:56:21.587684 | orchestrator | Saturday 03 May 2025 00:43:27 +0000 (0:00:01.574) 0:00:01.574 ********** 2025-05-03 00:56:21.587711 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 00:56:21.587738 | orchestrator | 2025-05-03 00:56:21.587762 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-03 00:56:21.587809 | orchestrator | Saturday 03 May 2025 00:43:28 +0000 (0:00:01.269) 0:00:02.843 ********** 2025-05-03 00:56:21.587837 | 
orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-03 00:56:21.587862 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-05-03 00:56:21.587886 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-05-03 00:56:21.588344 | orchestrator | 2025-05-03 00:56:21.588366 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-03 00:56:21.588381 | orchestrator | Saturday 03 May 2025 00:43:29 +0000 (0:00:00.789) 0:00:03.632 ********** 2025-05-03 00:56:21.588396 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 00:56:21.588411 | orchestrator | 2025-05-03 00:56:21.588425 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-03 00:56:21.588448 | orchestrator | Saturday 03 May 2025 00:43:30 +0000 (0:00:01.097) 0:00:04.730 ********** 2025-05-03 00:56:21.588463 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.588506 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.588531 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.588556 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.588580 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.588606 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:21.588633 | orchestrator | 2025-05-03 00:56:21.588658 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-03 00:56:21.588683 | orchestrator | Saturday 03 May 2025 00:43:32 +0000 (0:00:01.285) 0:00:06.016 ********** 2025-05-03 00:56:21.588709 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:21.588733 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.588758 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.588783 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.588806 | 
orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.588830 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.589034 | orchestrator | 2025-05-03 00:56:21.589060 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-03 00:56:21.589076 | orchestrator | Saturday 03 May 2025 00:43:32 +0000 (0:00:00.886) 0:00:06.903 ********** 2025-05-03 00:56:21.589093 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:21.589109 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.589126 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.589141 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.589156 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.589172 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.589188 | orchestrator | 2025-05-03 00:56:21.589203 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-03 00:56:21.589219 | orchestrator | Saturday 03 May 2025 00:43:34 +0000 (0:00:01.240) 0:00:08.143 ********** 2025-05-03 00:56:21.589234 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:21.589249 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.589307 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.589340 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.589367 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.589945 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.589983 | orchestrator | 2025-05-03 00:56:21.589998 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-03 00:56:21.590012 | orchestrator | Saturday 03 May 2025 00:43:35 +0000 (0:00:01.411) 0:00:09.554 ********** 2025-05-03 00:56:21.590061 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:21.590076 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.590090 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.590104 | orchestrator | ok: [testbed-node-3] 
2025-05-03 00:56:21.590118 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.590132 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.590146 | orchestrator |
2025-05-03 00:56:21.590191 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] *********************
2025-05-03 00:56:21.590207 | orchestrator | Saturday 03 May 2025 00:43:36 +0000 (0:00:01.010) 0:00:10.565 **********
2025-05-03 00:56:21.590221 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.590235 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.590248 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.590302 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.590317 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.590331 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.590350 | orchestrator |
2025-05-03 00:56:21.590375 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] ***
2025-05-03 00:56:21.590402 | orchestrator | Saturday 03 May 2025 00:43:37 +0000 (0:00:00.903) 0:00:11.468 **********
2025-05-03 00:56:21.590430 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.590464 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.590494 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.590521 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.590546 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.590571 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.590595 | orchestrator |
2025-05-03 00:56:21.590620 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ******************
2025-05-03 00:56:21.590667 | orchestrator | Saturday 03 May 2025 00:43:38 +0000 (0:00:00.743) 0:00:12.212 **********
2025-05-03 00:56:21.590692 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.590717 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.590743 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.590769 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.590795 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.590820 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.590844 | orchestrator |
2025-05-03 00:56:21.591427 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************
2025-05-03 00:56:21.591490 | orchestrator | Saturday 03 May 2025 00:43:39 +0000 (0:00:00.945) 0:00:13.157 **********
2025-05-03 00:56:21.591515 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-03 00:56:21.591543 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-03 00:56:21.591566 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-03 00:56:21.591590 | orchestrator |
2025-05-03 00:56:21.591614 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ********************************
2025-05-03 00:56:21.592197 | orchestrator | Saturday 03 May 2025 00:43:40 +0000 (0:00:01.209) 0:00:14.366 **********
2025-05-03 00:56:21.592542 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.592569 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.592584 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.592846 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.592860 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.592873 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.592885 | orchestrator |
2025-05-03 00:56:21.592898 | orchestrator | TASK [ceph-facts : find a running mon container] *******************************
2025-05-03 00:56:21.592910 | orchestrator | Saturday 03 May 2025 00:43:42 +0000 (0:00:01.745) 0:00:16.112 **********
2025-05-03 00:56:21.592931 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2025-05-03 00:56:21.592944 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-03 00:56:21.592957 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-03 00:56:21.592970 | orchestrator |
2025-05-03 00:56:21.592982 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ********************************
2025-05-03 00:56:21.592997 | orchestrator | Saturday 03 May 2025 00:43:45 +0000 (0:00:03.121) 0:00:19.233 **********
2025-05-03 00:56:21.593019 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-03 00:56:21.593042 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-03 00:56:21.593064 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-03 00:56:21.593241 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.593307 | orchestrator |
2025-05-03 00:56:21.593529 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] *********************
2025-05-03 00:56:21.593555 | orchestrator | Saturday 03 May 2025 00:43:45 +0000 (0:00:00.428) 0:00:19.662 **********
2025-05-03 00:56:21.593569 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-05-03 00:56:21.593585 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-05-03 00:56:21.593600 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-05-03 00:56:21.593622 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.593644 | orchestrator |
2025-05-03 00:56:21.594204 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] ***********************
2025-05-03 00:56:21.594239 | orchestrator | Saturday 03 May 2025 00:43:46 +0000 (0:00:00.907) 0:00:20.569 **********
2025-05-03 00:56:21.594288 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-03 00:56:21.594313 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-03 00:56:21.594337 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-03 00:56:21.594377 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.594568 | orchestrator |
2025-05-03 00:56:21.594587 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] ***************************
2025-05-03 00:56:21.594708 | orchestrator | Saturday 03 May 2025 00:43:46 +0000 (0:00:00.213) 0:00:20.783 **********
2025-05-03 00:56:21.594734 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-03 00:43:42.801230', 'end': '2025-05-03 00:43:43.035773', 'delta': '0:00:00.234543', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-05-03 00:56:21.594752 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-03 00:43:43.709096', 'end': '2025-05-03 00:43:44.077560', 'delta': '0:00:00.368464', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-05-03 00:56:21.594767 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-03 00:43:44.710599', 'end': '2025-05-03 00:43:44.992295', 'delta': '0:00:00.281696', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-05-03 00:56:21.594794 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.594808 | orchestrator |
2025-05-03 00:56:21.594821 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] *******************************
2025-05-03 00:56:21.594834 | orchestrator | Saturday 03 May 2025 00:43:47 +0000 (0:00:00.279) 0:00:21.062 **********
2025-05-03 00:56:21.594847 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.594861 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.594874 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.594887 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.594900 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.594913 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.594926 | orchestrator |
2025-05-03 00:56:21.594939 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] *************
2025-05-03 00:56:21.594952 | orchestrator | Saturday 03 May 2025 00:43:48 +0000 (0:00:01.635) 0:00:22.698 **********
2025-05-03 00:56:21.594964 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.594977 | orchestrator |
2025-05-03 00:56:21.594991 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] *********************************
2025-05-03 00:56:21.595016 | orchestrator | Saturday 03 May 2025 00:43:49 +0000 (0:00:00.720) 0:00:23.418 **********
2025-05-03 00:56:21.595030 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.595043 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.595056 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.595069 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.595082 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.595094 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.595107 | orchestrator |
2025-05-03 00:56:21.595120 | orchestrator | TASK [ceph-facts : get current fsid] *******************************************
2025-05-03 00:56:21.595133 | orchestrator | Saturday 03 May 2025 00:43:50 +0000 (0:00:00.620) 0:00:24.039 **********
2025-05-03 00:56:21.595185 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.595205 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.595218 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.595231 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.595243 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.595283 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.595297 | orchestrator |
2025-05-03 00:56:21.595309 | orchestrator | TASK [ceph-facts : set_fact fsid] **********************************************
2025-05-03 00:56:21.595321 | orchestrator | Saturday 03 May 2025 00:43:51 +0000 (0:00:01.450) 0:00:25.489 **********
2025-05-03 00:56:21.595334 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.595346 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.595358 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.595371 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.595383 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.595396 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.595408 | orchestrator |
2025-05-03 00:56:21.595420 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] ****************************
2025-05-03 00:56:21.595446 | orchestrator | Saturday 03 May 2025 00:43:52 +0000 (0:00:00.885) 0:00:26.374 **********
2025-05-03 00:56:21.595542 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.595560 | orchestrator |
2025-05-03 00:56:21.595573 | orchestrator | TASK [ceph-facts : generate cluster fsid] **************************************
2025-05-03 00:56:21.595585 | orchestrator | Saturday 03 May 2025 00:43:52 +0000 (0:00:00.257) 0:00:26.632 **********
2025-05-03 00:56:21.595598 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.595610 | orchestrator |
2025-05-03 00:56:21.595622 | orchestrator | TASK [ceph-facts : set_fact fsid] **********************************************
2025-05-03 00:56:21.595635 | orchestrator | Saturday 03 May 2025 00:43:52 +0000 (0:00:00.237) 0:00:26.869 **********
2025-05-03 00:56:21.595647 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.595659 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.595671 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.595683 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.595705 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.595729 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.595743 | orchestrator |
2025-05-03 00:56:21.595755 | orchestrator | TASK [ceph-facts : resolve device link(s)] *************************************
2025-05-03 00:56:21.595767 | orchestrator | Saturday 03 May 2025 00:43:53 +0000 (0:00:00.596) 0:00:27.465 **********
2025-05-03 00:56:21.595779 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.595792 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.595804 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.595816 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.595828 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.595840 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.595852 | orchestrator |
2025-05-03 00:56:21.595865 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] **************
2025-05-03 00:56:21.595877 | orchestrator | Saturday 03 May 2025 00:43:54 +0000 (0:00:01.072) 0:00:28.538 **********
2025-05-03 00:56:21.595893 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.595914 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.595935 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.595976 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.595990 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.596002 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.596014 | orchestrator |
2025-05-03 00:56:21.596027 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] ***************************
2025-05-03 00:56:21.596039 | orchestrator | Saturday 03 May 2025 00:43:55 +0000 (0:00:00.879) 0:00:29.417 **********
2025-05-03 00:56:21.596052 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.596064 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.596076 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.596088 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.596101 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.596127 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.596140 | orchestrator |
2025-05-03 00:56:21.596152 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] ****
2025-05-03 00:56:21.596168 | orchestrator | Saturday 03 May 2025 00:43:56 +0000 (0:00:01.405) 0:00:30.822 **********
2025-05-03 00:56:21.596182 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.596196 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.596210 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.596224 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.596238 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.596315 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.596333 | orchestrator |
2025-05-03 00:56:21.596347 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] ***********************
2025-05-03 00:56:21.596374 | orchestrator | Saturday 03 May 2025 00:43:57 +0000 (0:00:00.976) 0:00:31.799 **********
2025-05-03 00:56:21.596387 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.596400 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.596412 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.596424 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.596437 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.596449 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.596461 | orchestrator |
2025-05-03 00:56:21.596480 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-05-03 00:56:21.596496 | orchestrator | Saturday 03 May 2025 00:43:58 +0000 (0:00:00.996) 0:00:32.796 **********
2025-05-03 00:56:21.596517 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.596539 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.596554 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.596574 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.596588 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.596601 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.596624 | orchestrator |
2025-05-03 00:56:21.596652 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] ***
2025-05-03 00:56:21.596665 | orchestrator | Saturday 03 May 2025 00:43:59 +0000 (0:00:00.754) 0:00:33.551 **********
2025-05-03 00:56:21.596678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-03 00:56:21.596692 | orchestrator | skipping: [testbed-node-0] =>
(item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-03 00:56:21.596824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-03 00:56:21.596851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-03 00:56:21.596862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-03 00:56:21.596872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-03 00:56:21.596883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-03 00:56:21.596904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-03 00:56:21.596914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-03 00:56:21.596931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-03 00:56:21.596941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-03 00:56:21.596952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-03 00:56:21.597026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-03 00:56:21.597041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-03 00:56:21.597052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-03 00:56:21.597075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c9ced28c-7e17-4b12-aa14-5845e36ffd1d', 'scsi-SQEMU_QEMU_HARDDISK_c9ced28c-7e17-4b12-aa14-5845e36ffd1d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c9ced28c-7e17-4b12-aa14-5845e36ffd1d-part1', 'scsi-SQEMU_QEMU_HARDDISK_c9ced28c-7e17-4b12-aa14-5845e36ffd1d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c9ced28c-7e17-4b12-aa14-5845e36ffd1d-part14', 'scsi-SQEMU_QEMU_HARDDISK_c9ced28c-7e17-4b12-aa14-5845e36ffd1d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c9ced28c-7e17-4b12-aa14-5845e36ffd1d-part15', 'scsi-SQEMU_QEMU_HARDDISK_c9ced28c-7e17-4b12-aa14-5845e36ffd1d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c9ced28c-7e17-4b12-aa14-5845e36ffd1d-part16', 'scsi-SQEMU_QEMU_HARDDISK_c9ced28c-7e17-4b12-aa14-5845e36ffd1d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-03 00:56:21.597099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-03 00:56:21.597168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d8d47b0-f182-4852-ad97-fcd0be00a97a', 'scsi-SQEMU_QEMU_HARDDISK_7d8d47b0-f182-4852-ad97-fcd0be00a97a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-03 00:56:21.597192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95821b4b-1055-4eda-a747-4e8f49c386b3', 'scsi-SQEMU_QEMU_HARDDISK_95821b4b-1055-4eda-a747-4e8f49c386b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-03 00:56:21.597211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb0ccd8d-cf00-4d19-a5e4-10c9d40fdd4f', 'scsi-SQEMU_QEMU_HARDDISK_eb0ccd8d-cf00-4d19-a5e4-10c9d40fdd4f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-03 00:56:21.597224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-03-00-02-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-03 00:56:21.597364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5da3c3bd-eee1-4827-9013-ed3efdd154fa', 'scsi-SQEMU_QEMU_HARDDISK_5da3c3bd-eee1-4827-9013-ed3efdd154fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5da3c3bd-eee1-4827-9013-ed3efdd154fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_5da3c3bd-eee1-4827-9013-ed3efdd154fa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5da3c3bd-eee1-4827-9013-ed3efdd154fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_5da3c3bd-eee1-4827-9013-ed3efdd154fa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5da3c3bd-eee1-4827-9013-ed3efdd154fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_5da3c3bd-eee1-4827-9013-ed3efdd154fa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5da3c3bd-eee1-4827-9013-ed3efdd154fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_5da3c3bd-eee1-4827-9013-ed3efdd154fa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-03 00:56:21.597397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-03 00:56:21.597409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-03 00:56:21.597420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ee89ec0-10a2-40d4-b2a3-ab6963ecc84d', 'scsi-SQEMU_QEMU_HARDDISK_8ee89ec0-10a2-40d4-b2a3-ab6963ecc84d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-03 00:56:21.597431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-03 00:56:21.597442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb822f65-6f3e-4a32-952e-d8f6f7b2a5ab', 'scsi-SQEMU_QEMU_HARDDISK_eb822f65-6f3e-4a32-952e-d8f6f7b2a5ab'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:56:21.597458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.597472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.597483 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.597569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efb501f0-cdfc-4df2-8f60-0563271b3e1b', 'scsi-SQEMU_QEMU_HARDDISK_efb501f0-cdfc-4df2-8f60-0563271b3e1b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:56:21 | INFO  | Task c69429d0-1ca3-4cfa-87ac-47614257638d is in state SUCCESS  2025-05-03 00:56:21.597597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.597608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-03-00-02-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:56:21.597626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.597644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.597750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d40fd274-36d2-4e35-9a5d-9edc2ca11b28', 'scsi-SQEMU_QEMU_HARDDISK_d40fd274-36d2-4e35-9a5d-9edc2ca11b28'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d40fd274-36d2-4e35-9a5d-9edc2ca11b28-part1', 'scsi-SQEMU_QEMU_HARDDISK_d40fd274-36d2-4e35-9a5d-9edc2ca11b28-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d40fd274-36d2-4e35-9a5d-9edc2ca11b28-part14', 'scsi-SQEMU_QEMU_HARDDISK_d40fd274-36d2-4e35-9a5d-9edc2ca11b28-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d40fd274-36d2-4e35-9a5d-9edc2ca11b28-part15', 'scsi-SQEMU_QEMU_HARDDISK_d40fd274-36d2-4e35-9a5d-9edc2ca11b28-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 
'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d40fd274-36d2-4e35-9a5d-9edc2ca11b28-part16', 'scsi-SQEMU_QEMU_HARDDISK_d40fd274-36d2-4e35-9a5d-9edc2ca11b28-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:56:21.597781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfd0f0a1-4f20-4970-92b4-aeacbd22f937', 'scsi-SQEMU_QEMU_HARDDISK_bfd0f0a1-4f20-4970-92b4-aeacbd22f937'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:56:21.597793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_227e3005-1ee8-491d-9865-88581feda309', 'scsi-SQEMU_QEMU_HARDDISK_227e3005-1ee8-491d-9865-88581feda309'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:56:21.597804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ed49551-2949-4c54-ab15-9674e610f8a2', 'scsi-SQEMU_QEMU_HARDDISK_2ed49551-2949-4c54-ab15-9674e610f8a2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:56:21.597821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-03-00-02-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:56:21.597832 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--eca5292b--8794--515a--ad73--b5efc7970d6a-osd--block--eca5292b--8794--515a--ad73--b5efc7970d6a', 'dm-uuid-LVM-5wi2Uys0qhygBUkChs5OXnVhGMzfG0GakB9L4O31j5FxXitHvecuhqod6eK6c34C'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.597852 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a7a18630--ef35--59a0--a2f0--363b4ab3cd76-osd--block--a7a18630--ef35--59a0--a2f0--363b4ab3cd76', 'dm-uuid-LVM-kp9n5HxxuNkKHyP78qbcuszFm7e3CGahgfk8tFiDTv0tEIu3EeQmgY7AnN6kuQeo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.597863 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.597928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.597943 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.597954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.597976 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.598006 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.598042 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.598055 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.598072 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.598198 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d13b5c6-5c37-4969-9bdf-e1b816fbff4c', 'scsi-SQEMU_QEMU_HARDDISK_9d13b5c6-5c37-4969-9bdf-e1b816fbff4c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d13b5c6-5c37-4969-9bdf-e1b816fbff4c-part1', 'scsi-SQEMU_QEMU_HARDDISK_9d13b5c6-5c37-4969-9bdf-e1b816fbff4c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d13b5c6-5c37-4969-9bdf-e1b816fbff4c-part14', 'scsi-SQEMU_QEMU_HARDDISK_9d13b5c6-5c37-4969-9bdf-e1b816fbff4c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d13b5c6-5c37-4969-9bdf-e1b816fbff4c-part15', 'scsi-SQEMU_QEMU_HARDDISK_9d13b5c6-5c37-4969-9bdf-e1b816fbff4c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d13b5c6-5c37-4969-9bdf-e1b816fbff4c-part16', 'scsi-SQEMU_QEMU_HARDDISK_9d13b5c6-5c37-4969-9bdf-e1b816fbff4c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:56:21.598227 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--eca5292b--8794--515a--ad73--b5efc7970d6a-osd--block--eca5292b--8794--515a--ad73--b5efc7970d6a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-OGM7bt-6Lws-mzpe-FKub-u2Iw-7z2j-EBw5Od', 'scsi-0QEMU_QEMU_HARDDISK_60710ea4-1ba5-44da-b34f-cb4cc5f20e97', 'scsi-SQEMU_QEMU_HARDDISK_60710ea4-1ba5-44da-b34f-cb4cc5f20e97'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:56:21.598285 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a7a18630--ef35--59a0--a2f0--363b4ab3cd76-osd--block--a7a18630--ef35--59a0--a2f0--363b4ab3cd76'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vaDo6X-Qoz1-R4DZ-qU2b-jOxG-jAxc-s9G0Cj', 'scsi-0QEMU_QEMU_HARDDISK_494ae4e2-fb03-468f-bae0-ffa1e1c51b21', 'scsi-SQEMU_QEMU_HARDDISK_494ae4e2-fb03-468f-bae0-ffa1e1c51b21'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:56:21.598304 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbc89abf-41e4-403a-af47-fe4d6db2bcc8', 'scsi-SQEMU_QEMU_HARDDISK_fbc89abf-41e4-403a-af47-fe4d6db2bcc8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:56:21.598323 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-03-00-02-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:56:21.598406 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ba494882--e80b--5600--bb3d--47da88e10312-osd--block--ba494882--e80b--5600--bb3d--47da88e10312', 'dm-uuid-LVM-yDJJ83ZO7AdoFPcoVMO7Rk06u8j52pHc3X42D0qUuRvfNM5xXORgoiyqmQUibPgv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.598422 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1900210e--f5cf--596b--8948--bbf6ca001e1a-osd--block--1900210e--f5cf--596b--8948--bbf6ca001e1a', 
'dm-uuid-LVM-HYRfKS28EYFp3oxOfvep8OhgS2R4Om6mRKPWOU1bJ0PDKkMQaEu3Pm2bL5pdCURq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.598433 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.598465 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.598485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.598503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.598533 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.598545 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.598555 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.598585 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.598602 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.598685 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.598715 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80e5cdca-5597-4b73-960a-f9a5fdfd6b66', 'scsi-SQEMU_QEMU_HARDDISK_80e5cdca-5597-4b73-960a-f9a5fdfd6b66'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80e5cdca-5597-4b73-960a-f9a5fdfd6b66-part1', 'scsi-SQEMU_QEMU_HARDDISK_80e5cdca-5597-4b73-960a-f9a5fdfd6b66-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80e5cdca-5597-4b73-960a-f9a5fdfd6b66-part14', 'scsi-SQEMU_QEMU_HARDDISK_80e5cdca-5597-4b73-960a-f9a5fdfd6b66-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80e5cdca-5597-4b73-960a-f9a5fdfd6b66-part15', 'scsi-SQEMU_QEMU_HARDDISK_80e5cdca-5597-4b73-960a-f9a5fdfd6b66-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80e5cdca-5597-4b73-960a-f9a5fdfd6b66-part16', 'scsi-SQEMU_QEMU_HARDDISK_80e5cdca-5597-4b73-960a-f9a5fdfd6b66-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 
'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:56:21.598734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ba494882--e80b--5600--bb3d--47da88e10312-osd--block--ba494882--e80b--5600--bb3d--47da88e10312'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bIXglJ-OLEb-NbWl-oOub-R10M-mHYP-C3V7kQ', 'scsi-0QEMU_QEMU_HARDDISK_8a5c9f8e-4062-4859-b774-db2eb35d9068', 'scsi-SQEMU_QEMU_HARDDISK_8a5c9f8e-4062-4859-b774-db2eb35d9068'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:56:21.598745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--63c4e6bd--963b--5ec8--a8d0--e52c79716553-osd--block--63c4e6bd--963b--5ec8--a8d0--e52c79716553', 'dm-uuid-LVM-MiWTDaoZ0DQk8f75uPQZInv663LTp1egWVeD9SImLQUjMxdE5G2TMgiKy3wAIc5l'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.598827 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': 
['ceph--1900210e--f5cf--596b--8948--bbf6ca001e1a-osd--block--1900210e--f5cf--596b--8948--bbf6ca001e1a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qVneO7-0GKD-f1CR-Y8v7-JIC4-Z8Uw-v4Jeo8', 'scsi-0QEMU_QEMU_HARDDISK_6a5303a2-e8ba-422b-a7dc-ef5d91cab650', 'scsi-SQEMU_QEMU_HARDDISK_6a5303a2-e8ba-422b-a7dc-ef5d91cab650'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:56:21.598844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f0db6d06--6fa6--557d--977f--52f0cf84ead8-osd--block--f0db6d06--6fa6--557d--977f--52f0cf84ead8', 'dm-uuid-LVM-FeQ4dS3xIArOhUe4AB0NAWIFxklHuD7CePqpl3uW2Y9xGDRtr08HIoWKaUXHu5fi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.598861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78b7c3f7-b361-43c7-bb55-097042834471', 'scsi-SQEMU_QEMU_HARDDISK_78b7c3f7-b361-43c7-bb55-097042834471'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:56:21.598873 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.598885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.598901 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-03-00-02-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}) 
 2025-05-03 00:56:21.598914 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.598931 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.598949 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.599050 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.599077 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.599105 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.599121 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:56:21.599132 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f7b2d31-8f7a-47ec-8821-2cb523ca656c', 'scsi-SQEMU_QEMU_HARDDISK_4f7b2d31-8f7a-47ec-8821-2cb523ca656c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f7b2d31-8f7a-47ec-8821-2cb523ca656c-part1', 'scsi-SQEMU_QEMU_HARDDISK_4f7b2d31-8f7a-47ec-8821-2cb523ca656c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f7b2d31-8f7a-47ec-8821-2cb523ca656c-part14', 'scsi-SQEMU_QEMU_HARDDISK_4f7b2d31-8f7a-47ec-8821-2cb523ca656c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f7b2d31-8f7a-47ec-8821-2cb523ca656c-part15', 'scsi-SQEMU_QEMU_HARDDISK_4f7b2d31-8f7a-47ec-8821-2cb523ca656c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f7b2d31-8f7a-47ec-8821-2cb523ca656c-part16', 'scsi-SQEMU_QEMU_HARDDISK_4f7b2d31-8f7a-47ec-8821-2cb523ca656c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:56:21.599201 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--63c4e6bd--963b--5ec8--a8d0--e52c79716553-osd--block--63c4e6bd--963b--5ec8--a8d0--e52c79716553'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2J3MGh-yNz7-dSNS-ORTt-jcBj-2ntY-G0OcM3', 'scsi-0QEMU_QEMU_HARDDISK_626178b7-dd78-4872-a9d6-22f12232405d', 'scsi-SQEMU_QEMU_HARDDISK_626178b7-dd78-4872-a9d6-22f12232405d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:56:21.599217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f0db6d06--6fa6--557d--977f--52f0cf84ead8-osd--block--f0db6d06--6fa6--557d--977f--52f0cf84ead8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ijisTF-blyr-zhT5-NEtO-Qk9g-ruUm-tjRUjw', 'scsi-0QEMU_QEMU_HARDDISK_cf08c0e7-08ad-4d2d-8710-ce05fc114cf2', 'scsi-SQEMU_QEMU_HARDDISK_cf08c0e7-08ad-4d2d-8710-ce05fc114cf2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:56:21.599234 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_592c23d3-c323-4834-ad18-db1726824a9d', 'scsi-SQEMU_QEMU_HARDDISK_592c23d3-c323-4834-ad18-db1726824a9d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:56:21.599246 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-03-00-02-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:56:21.599328 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.599367 | orchestrator | 2025-05-03 00:56:21.599379 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-03 00:56:21.599390 | orchestrator | Saturday 03 May 2025 00:44:01 +0000 (0:00:01.932) 0:00:35.483 ********** 2025-05-03 00:56:21.599422 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.599433 | orchestrator | 2025-05-03 00:56:21.599443 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-03 00:56:21.599453 | orchestrator | Saturday 03 May 2025 00:44:01 +0000 (0:00:00.339) 0:00:35.822 ********** 2025-05-03 00:56:21.599463 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.599473 | orchestrator | 2025-05-03 00:56:21.599483 | orchestrator | TASK [ceph-facts : set_fact 
rgw_hostname] ************************************** 2025-05-03 00:56:21.599493 | orchestrator | Saturday 03 May 2025 00:44:01 +0000 (0:00:00.141) 0:00:35.963 ********** 2025-05-03 00:56:21.599503 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.599513 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.599523 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.599534 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.599544 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.599553 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.599563 | orchestrator | 2025-05-03 00:56:21.599573 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-03 00:56:21.599583 | orchestrator | Saturday 03 May 2025 00:44:02 +0000 (0:00:00.769) 0:00:36.733 ********** 2025-05-03 00:56:21.599593 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:21.599604 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.599614 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.599624 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.599634 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.599643 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.599654 | orchestrator | 2025-05-03 00:56:21.599664 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-03 00:56:21.599682 | orchestrator | Saturday 03 May 2025 00:44:04 +0000 (0:00:01.800) 0:00:38.533 ********** 2025-05-03 00:56:21.599692 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:21.599702 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.599712 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.599722 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.599732 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.599742 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.599751 | orchestrator | 
2025-05-03 00:56:21.599762 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-03 00:56:21.599772 | orchestrator | Saturday 03 May 2025 00:44:05 +0000 (0:00:00.902) 0:00:39.436 ********** 2025-05-03 00:56:21.599782 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.599870 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.599885 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.599894 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.599904 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.599914 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.599923 | orchestrator | 2025-05-03 00:56:21.599933 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-03 00:56:21.599943 | orchestrator | Saturday 03 May 2025 00:44:06 +0000 (0:00:01.344) 0:00:40.781 ********** 2025-05-03 00:56:21.599953 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.599963 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.599972 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.599982 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.599992 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.600010 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.600020 | orchestrator | 2025-05-03 00:56:21.600030 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-03 00:56:21.600040 | orchestrator | Saturday 03 May 2025 00:44:07 +0000 (0:00:00.952) 0:00:41.733 ********** 2025-05-03 00:56:21.600050 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.600060 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.600069 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.600079 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.600089 | orchestrator | 
skipping: [testbed-node-4] 2025-05-03 00:56:21.600101 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.600116 | orchestrator | 2025-05-03 00:56:21.600130 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-03 00:56:21.600143 | orchestrator | Saturday 03 May 2025 00:44:09 +0000 (0:00:01.407) 0:00:43.141 ********** 2025-05-03 00:56:21.600157 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.600171 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.600194 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.600210 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.600225 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.600241 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.600275 | orchestrator | 2025-05-03 00:56:21.600301 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-03 00:56:21.600310 | orchestrator | Saturday 03 May 2025 00:44:09 +0000 (0:00:00.773) 0:00:43.915 ********** 2025-05-03 00:56:21.600319 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-03 00:56:21.600328 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-03 00:56:21.600337 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-03 00:56:21.600345 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-03 00:56:21.600354 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-03 00:56:21.600363 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-03 00:56:21.600372 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-03 00:56:21.600380 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-03 00:56:21.600404 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-03 00:56:21.600413 | 
orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-03 00:56:21.600422 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.600430 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.600439 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.600447 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-03 00:56:21.600456 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-03 00:56:21.600464 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-03 00:56:21.600473 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.600481 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-03 00:56:21.600490 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-03 00:56:21.600498 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-03 00:56:21.600506 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-03 00:56:21.600515 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.600523 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-03 00:56:21.600532 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.600541 | orchestrator | 2025-05-03 00:56:21.600549 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-03 00:56:21.600558 | orchestrator | Saturday 03 May 2025 00:44:12 +0000 (0:00:02.335) 0:00:46.251 ********** 2025-05-03 00:56:21.600566 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-03 00:56:21.600575 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-03 00:56:21.600583 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-03 00:56:21.600592 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-03 00:56:21.600600 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-2)  2025-05-03 00:56:21.600609 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-03 00:56:21.600617 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-03 00:56:21.600625 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-03 00:56:21.600634 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.600642 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-03 00:56:21.600651 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-03 00:56:21.600659 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-03 00:56:21.600667 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.600676 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-03 00:56:21.600684 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.600693 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.600701 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-03 00:56:21.600710 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-03 00:56:21.600782 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-03 00:56:21.600794 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-03 00:56:21.600803 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.600812 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-03 00:56:21.600820 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-03 00:56:21.600829 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.600837 | orchestrator | 2025-05-03 00:56:21.600846 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-03 00:56:21.600854 | orchestrator | Saturday 03 May 2025 00:44:15 +0000 (0:00:02.982) 
0:00:49.233 ********** 2025-05-03 00:56:21.600863 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-03 00:56:21.600871 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-05-03 00:56:21.600885 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-05-03 00:56:21.600894 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-03 00:56:21.600902 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-03 00:56:21.600911 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-03 00:56:21.600919 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-03 00:56:21.600928 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-05-03 00:56:21.600936 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-03 00:56:21.600945 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-05-03 00:56:21.600953 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-03 00:56:21.600962 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-05-03 00:56:21.600970 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-05-03 00:56:21.600979 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-05-03 00:56:21.600987 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-05-03 00:56:21.600996 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-05-03 00:56:21.601004 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-05-03 00:56:21.601012 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-05-03 00:56:21.601021 | orchestrator | 2025-05-03 00:56:21.601029 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-03 00:56:21.601038 | orchestrator | Saturday 03 May 2025 00:44:19 +0000 (0:00:04.675) 0:00:53.908 ********** 2025-05-03 00:56:21.601047 | orchestrator | skipping: [testbed-node-0] 
=> (item=testbed-node-0)  2025-05-03 00:56:21.601055 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-03 00:56:21.601064 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-03 00:56:21.601072 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-03 00:56:21.601080 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-03 00:56:21.601093 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-03 00:56:21.601102 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.601110 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-03 00:56:21.601119 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-03 00:56:21.601127 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.601136 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-03 00:56:21.601149 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-03 00:56:21.601157 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-03 00:56:21.601166 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-03 00:56:21.601174 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.601183 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-03 00:56:21.601191 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-03 00:56:21.601200 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.601208 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-03 00:56:21.601217 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.601225 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-03 00:56:21.601233 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-03 00:56:21.601242 | orchestrator | skipping: 
[testbed-node-5] => (item=testbed-node-2)  2025-05-03 00:56:21.601267 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.601277 | orchestrator | 2025-05-03 00:56:21.601286 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-03 00:56:21.601295 | orchestrator | Saturday 03 May 2025 00:44:21 +0000 (0:00:01.166) 0:00:55.074 ********** 2025-05-03 00:56:21.601303 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-03 00:56:21.601315 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-03 00:56:21.601329 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-03 00:56:21.601338 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-03 00:56:21.601346 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-03 00:56:21.601355 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-03 00:56:21.601363 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.601372 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.601380 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-03 00:56:21.601389 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-03 00:56:21.601397 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-03 00:56:21.601406 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-03 00:56:21.601415 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-03 00:56:21.601426 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-03 00:56:21.601486 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.601499 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-03 00:56:21.601509 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.601518 | orchestrator | skipping: 
[testbed-node-4] => (item=testbed-node-1)  2025-05-03 00:56:21.601528 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-03 00:56:21.601538 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.601548 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-03 00:56:21.601557 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-03 00:56:21.601567 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-03 00:56:21.601576 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.601586 | orchestrator | 2025-05-03 00:56:21.601595 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-03 00:56:21.601605 | orchestrator | Saturday 03 May 2025 00:44:22 +0000 (0:00:01.142) 0:00:56.217 ********** 2025-05-03 00:56:21.601615 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-05-03 00:56:21.601625 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-03 00:56:21.601635 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-03 00:56:21.601645 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-03 00:56:21.601655 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-03 00:56:21.601665 | orchestrator | ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'}) 2025-05-03 00:56:21.601675 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-03 00:56:21.601685 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-03 00:56:21.601694 | orchestrator | skipping: [testbed-node-3] => 
(item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-03 00:56:21.601704 | orchestrator | ok: [testbed-node-2] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'}) 2025-05-03 00:56:21.601714 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-03 00:56:21.601724 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-03 00:56:21.601733 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-03 00:56:21.601743 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-03 00:56:21.601753 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-03 00:56:21.601768 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.601777 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.601786 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-03 00:56:21.601795 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-03 00:56:21.601803 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-03 00:56:21.601812 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.601821 | orchestrator | 2025-05-03 00:56:21.601832 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-03 00:56:21.601846 | orchestrator | Saturday 03 May 2025 00:44:23 +0000 (0:00:01.278) 0:00:57.496 ********** 2025-05-03 00:56:21.601861 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.601873 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.601882 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.601905 | 
orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-03 00:56:21.601914 | orchestrator |
2025-05-03 00:56:21.601923 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-03 00:56:21.601932 | orchestrator | Saturday 03 May 2025 00:44:24 +0000 (0:00:01.475) 0:00:58.971 **********
2025-05-03 00:56:21.601940 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.601949 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.601957 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.601966 | orchestrator |
2025-05-03 00:56:21.601974 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-03 00:56:21.601983 | orchestrator | Saturday 03 May 2025 00:44:25 +0000 (0:00:00.791) 0:00:59.763 **********
2025-05-03 00:56:21.601991 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.602000 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.602008 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.602056 | orchestrator |
2025-05-03 00:56:21.602067 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-03 00:56:21.602076 | orchestrator | Saturday 03 May 2025 00:44:26 +0000 (0:00:00.727) 0:01:00.491 **********
2025-05-03 00:56:21.602085 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.602093 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.602102 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.602111 | orchestrator |
2025-05-03 00:56:21.602119 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-03 00:56:21.602128 | orchestrator | Saturday 03 May 2025 00:44:27 +0000 (0:00:00.603) 0:01:01.094 **********
2025-05-03 00:56:21.602137 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.602146 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.602209 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.602221 | orchestrator |
2025-05-03 00:56:21.602230 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-03 00:56:21.602239 | orchestrator | Saturday 03 May 2025 00:44:28 +0000 (0:00:00.922) 0:01:02.017 **********
2025-05-03 00:56:21.602247 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3) 
2025-05-03 00:56:21.602321 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4) 
2025-05-03 00:56:21.602334 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
2025-05-03 00:56:21.602342 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.602351 | orchestrator |
2025-05-03 00:56:21.602359 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-03 00:56:21.602368 | orchestrator | Saturday 03 May 2025 00:44:28 +0000 (0:00:00.743) 0:01:02.761 **********
2025-05-03 00:56:21.602376 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3) 
2025-05-03 00:56:21.602384 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4) 
2025-05-03 00:56:21.602399 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
2025-05-03 00:56:21.602407 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.602415 | orchestrator |
2025-05-03 00:56:21.602423 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-03 00:56:21.602431 | orchestrator | Saturday 03 May 2025 00:44:29 +0000 (0:00:00.686) 0:01:03.448 **********
2025-05-03 00:56:21.602439 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3) 
2025-05-03 00:56:21.602446 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4) 
2025-05-03 00:56:21.602454 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
2025-05-03 00:56:21.602462 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.602476 | orchestrator |
2025-05-03 00:56:21.602484 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-03 00:56:21.602492 | orchestrator | Saturday 03 May 2025 00:44:30 +0000 (0:00:01.257) 0:01:04.705 **********
2025-05-03 00:56:21.602500 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.602508 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.602520 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.602528 | orchestrator |
2025-05-03 00:56:21.602536 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-03 00:56:21.602544 | orchestrator | Saturday 03 May 2025 00:44:31 +0000 (0:00:00.604) 0:01:05.310 **********
2025-05-03 00:56:21.602552 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-05-03 00:56:21.602560 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-05-03 00:56:21.602568 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-05-03 00:56:21.602576 | orchestrator |
2025-05-03 00:56:21.602584 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-03 00:56:21.602592 | orchestrator | Saturday 03 May 2025 00:44:32 +0000 (0:00:01.040) 0:01:06.350 **********
2025-05-03 00:56:21.602599 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.602607 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.602615 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.602624 | orchestrator |
2025-05-03 00:56:21.602631 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-03 00:56:21.602639 | orchestrator | Saturday 03 May 2025 00:44:32 +0000 (0:00:00.530) 0:01:06.881 **********
2025-05-03 00:56:21.602647 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.602655 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.602663 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.602671 | orchestrator |
2025-05-03 00:56:21.602679 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-03 00:56:21.602687 | orchestrator | Saturday 03 May 2025 00:44:33 +0000 (0:00:00.668) 0:01:07.550 **********
2025-05-03 00:56:21.602695 | orchestrator | skipping: [testbed-node-3] => (item=0) 
2025-05-03 00:56:21.602703 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.602711 | orchestrator | skipping: [testbed-node-4] => (item=0) 
2025-05-03 00:56:21.602719 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.602727 | orchestrator | skipping: [testbed-node-5] => (item=0) 
2025-05-03 00:56:21.602735 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.602743 | orchestrator |
2025-05-03 00:56:21.602751 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-03 00:56:21.602759 | orchestrator | Saturday 03 May 2025 00:44:34 +0000 (0:00:00.791) 0:01:08.341 **********
2025-05-03 00:56:21.602767 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 
2025-05-03 00:56:21.602775 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.602783 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 
2025-05-03 00:56:21.602791 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.602800 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 
2025-05-03 00:56:21.602812 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.602819 | orchestrator |
2025-05-03 00:56:21.602831 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-03 00:56:21.602839 | orchestrator | Saturday 03 May 2025 00:44:35 +0000 (0:00:00.833) 0:01:09.174 **********
2025-05-03 00:56:21.602849 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3) 
2025-05-03 00:56:21.602858 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4) 
2025-05-03 00:56:21.602867 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3) 
2025-05-03 00:56:21.602876 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
2025-05-03 00:56:21.602886 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.602895 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4) 
2025-05-03 00:56:21.602904 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3) 
2025-05-03 00:56:21.602967 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5) 
2025-05-03 00:56:21.602979 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.602989 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4) 
2025-05-03 00:56:21.602997 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5) 
2025-05-03 00:56:21.603006 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.603014 | orchestrator |
2025-05-03 00:56:21.603023 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] ***
2025-05-03 00:56:21.603031 | orchestrator | Saturday 03 May 2025 00:44:36 +0000 (0:00:01.063) 0:01:10.238 **********
2025-05-03 00:56:21.603039 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.603046 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.603054 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.603062 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.603070 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.603078 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.603086 | orchestrator |
2025-05-03 00:56:21.603094 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] **************************************
2025-05-03 00:56:21.603102 | orchestrator | Saturday 03 May 2025 00:44:37 +0000 (0:00:00.973) 0:01:11.212 **********
2025-05-03 00:56:21.603110 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-03 00:56:21.603118 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-03 00:56:21.603126 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-03 00:56:21.603134 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-05-03 00:56:21.603141 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-03 00:56:21.603149 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-03 00:56:21.603157 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-03 00:56:21.603165 | orchestrator |
2025-05-03 00:56:21.603173 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ********************************
2025-05-03 00:56:21.603181 | orchestrator | Saturday 03 May 2025 00:44:38 +0000 (0:00:00.872) 0:01:12.085 **********
2025-05-03 00:56:21.603189 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-03 00:56:21.603197 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-03 00:56:21.603205 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-03 00:56:21.603212 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-05-03 00:56:21.603220 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-03 00:56:21.603228 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-03 00:56:21.603236 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-03 00:56:21.603268 | orchestrator |
2025-05-03 00:56:21.603282 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-05-03 00:56:21.603296 | orchestrator | Saturday 03 May 2025 00:44:40 +0000 (0:00:01.999) 0:01:14.085 **********
2025-05-03 00:56:21.603321 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-03 00:56:21.603331 | orchestrator |
2025-05-03 00:56:21.603339 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-05-03 00:56:21.603347 | orchestrator | Saturday 03 May 2025 00:44:41 +0000 (0:00:01.473) 0:01:15.558 **********
2025-05-03 00:56:21.603355 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.603363 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.603371 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.603383 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.603397 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.603409 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.603417 | orchestrator |
2025-05-03 00:56:21.603426 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-05-03 00:56:21.603434 | orchestrator | Saturday 03 May 2025 00:44:42 +0000 (0:00:01.125) 0:01:16.684 **********
2025-05-03 00:56:21.603442 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.603450 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.603457 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.603465 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.603473 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.603481 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.603489 | orchestrator |
2025-05-03 00:56:21.603497 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-05-03 00:56:21.603505 | orchestrator | Saturday 03 May 2025 00:44:43 +0000 (0:00:01.232) 0:01:17.917 **********
2025-05-03 00:56:21.603512 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.603521 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.603528 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.603536 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.603544 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.603552 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.603560 | orchestrator |
2025-05-03 00:56:21.603568 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-05-03 00:56:21.603576 | orchestrator | Saturday 03 May 2025 00:44:45 +0000 (0:00:01.420) 0:01:19.338 **********
2025-05-03 00:56:21.603584 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.603592 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.603599 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.603607 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.603615 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.603623 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.603631 | orchestrator |
2025-05-03 00:56:21.603639 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-05-03 00:56:21.603698 | orchestrator | Saturday 03 May 2025 00:44:46 +0000 (0:00:01.469) 0:01:20.807 **********
2025-05-03 00:56:21.603710 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.603718 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.603731 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.603739 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.603747 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.603754 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.603762 | orchestrator |
2025-05-03 00:56:21.603773 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-05-03 00:56:21.603782 | orchestrator | Saturday 03 May 2025 00:44:47 +0000 (0:00:00.746) 0:01:21.554 **********
2025-05-03 00:56:21.603789 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.603797 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.603814 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.603822 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.603830 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.603838 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.603846 | orchestrator |
2025-05-03 00:56:21.603854 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-05-03 00:56:21.603861 | orchestrator | Saturday 03 May 2025 00:44:48 +0000 (0:00:00.904) 0:01:22.458 **********
2025-05-03 00:56:21.603869 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.603877 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.603885 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.603893 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.603901 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.603908 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.603916 | orchestrator |
2025-05-03 00:56:21.603924 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-05-03 00:56:21.603932 | orchestrator | Saturday 03 May 2025 00:44:49 +0000 (0:00:00.880) 0:01:23.339 **********
2025-05-03 00:56:21.603940 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.603948 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.603955 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.603963 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.603971 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.603979 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.603987 | orchestrator |
2025-05-03 00:56:21.603995 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-05-03 00:56:21.604003 | orchestrator | Saturday 03 May 2025 00:44:50 +0000 (0:00:00.861) 0:01:24.201 **********
2025-05-03 00:56:21.604011 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.604018 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.604026 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.604034 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.604042 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.604049 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.604057 | orchestrator |
2025-05-03 00:56:21.604065 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-05-03 00:56:21.604073 | orchestrator | Saturday 03 May 2025 00:44:50 +0000 (0:00:00.771) 0:01:24.973 **********
2025-05-03 00:56:21.604081 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.604089 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.604096 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.604104 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.604112 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.604120 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.604127 | orchestrator |
2025-05-03 00:56:21.604135 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-05-03 00:56:21.604143 | orchestrator | Saturday 03 May 2025 00:44:52 +0000 (0:00:01.286) 0:01:26.259 **********
2025-05-03 00:56:21.604151 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.604159 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.604167 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.604175 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.604182 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.604190 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.604198 | orchestrator |
2025-05-03 00:56:21.604206 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-05-03 00:56:21.604214 | orchestrator | Saturday 03 May 2025 00:44:53 +0000 (0:00:01.320) 0:01:27.580 **********
2025-05-03 00:56:21.604222 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.604230 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.604237 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.604245 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.604301 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.604311 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.604324 | orchestrator |
2025-05-03 00:56:21.604332 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-05-03 00:56:21.604340 | orchestrator | Saturday 03 May 2025 00:44:54 +0000 (0:00:00.887) 0:01:28.467 **********
2025-05-03 00:56:21.604348 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.604356 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.604364 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.604372 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.604380 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.604388 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.604401 | orchestrator |
2025-05-03 00:56:21.604409 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-05-03 00:56:21.604417 | orchestrator | Saturday 03 May 2025 00:44:55 +0000 (0:00:01.389) 0:01:29.856 **********
2025-05-03 00:56:21.604425 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.604434 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.604442 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.604450 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.604458 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.604466 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.604473 | orchestrator |
2025-05-03 00:56:21.604481 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-05-03 00:56:21.604489 | orchestrator | Saturday 03 May 2025 00:44:56 +0000 (0:00:01.095) 0:01:30.952 **********
2025-05-03 00:56:21.604497 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.604505 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.604513 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.604521 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.604528 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.604536 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.604544 | orchestrator |
2025-05-03 00:56:21.604603 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-05-03 00:56:21.604614 | orchestrator | Saturday 03 May 2025 00:44:57 +0000 (0:00:00.748) 0:01:31.700 **********
2025-05-03 00:56:21.604622 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.604629 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.604635 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.604642 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.604649 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.604656 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.604663 | orchestrator |
2025-05-03 00:56:21.604670 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-05-03 00:56:21.604677 | orchestrator | Saturday 03 May 2025 00:44:58 +0000 (0:00:00.681) 0:01:32.382 **********
2025-05-03 00:56:21.604683 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.604690 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.604697 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.604704 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.604711 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.604717 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.604724 | orchestrator |
2025-05-03 00:56:21.604731 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-05-03 00:56:21.604738 | orchestrator | Saturday 03 May 2025 00:44:58 +0000 (0:00:00.447) 0:01:32.830 **********
2025-05-03 00:56:21.604745 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.604752 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.604759 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.604765 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.604772 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.604779 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.604786 | orchestrator |
2025-05-03 00:56:21.604793 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-05-03 00:56:21.604800 | orchestrator | Saturday 03 May 2025 00:44:59 +0000 (0:00:00.795) 0:01:33.625 **********
2025-05-03 00:56:21.604811 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.604818 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.604825 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.604832 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.604839 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.604846 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.604852 | orchestrator |
2025-05-03 00:56:21.604859 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-05-03 00:56:21.604866 | orchestrator | Saturday 03 May 2025 00:45:00 +0000 (0:00:00.762) 0:01:34.388 **********
2025-05-03 00:56:21.604873 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.604880 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.604887 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.604893 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.604900 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.604907 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.604914 | orchestrator |
2025-05-03 00:56:21.604926 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-05-03 00:56:21.604943 | orchestrator | Saturday 03 May 2025 00:45:01 +0000 (0:00:00.955) 0:01:35.343 **********
2025-05-03 00:56:21.604953 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.604975 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.604982 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.604989 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.604996 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.605003 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.605010 | orchestrator |
2025-05-03 00:56:21.605017 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-05-03 00:56:21.605024 | orchestrator | Saturday 03 May 2025 00:45:01 +0000 (0:00:00.600) 0:01:35.943 **********
2025-05-03 00:56:21.605031 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.605038 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.605048 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.605055 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.605062 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.605069 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.605076 | orchestrator |
2025-05-03 00:56:21.605083 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-05-03 00:56:21.605090 | orchestrator | Saturday 03 May 2025 00:45:02 +0000 (0:00:00.695) 0:01:36.638 **********
2025-05-03 00:56:21.605097 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.605103 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.605110 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.605117 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.605124 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.605131 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.605138 | orchestrator |
2025-05-03 00:56:21.605144 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-05-03 00:56:21.605151 | orchestrator | Saturday 03 May 2025 00:45:03 +0000 (0:00:00.548) 0:01:37.187 **********
2025-05-03 00:56:21.605158 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.605165 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.605172 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.605179 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.605186 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.605193 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.605200 | orchestrator |
2025-05-03 00:56:21.605207 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-05-03 00:56:21.605214 | orchestrator | Saturday 03 May 2025 00:45:03 +0000 (0:00:00.769) 0:01:37.957 **********
2025-05-03 00:56:21.605220 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.605227 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.605234 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.605245 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.605267 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.605275 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.605282 | orchestrator |
2025-05-03 00:56:21.605289 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-05-03 00:56:21.605296 | orchestrator | Saturday 03 May 2025 00:45:04 +0000 (0:00:00.691) 0:01:38.648 **********
2025-05-03 00:56:21.605303 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.605310 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.605317 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.605369 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.605379 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.605386 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.605393 | orchestrator |
2025-05-03 00:56:21.605400 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-05-03 00:56:21.605407 | orchestrator | Saturday 03 May 2025 00:45:05 +0000 (0:00:00.808) 0:01:39.457 **********
2025-05-03 00:56:21.605414 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.605421 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.605428 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.605435 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.605442 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.605449 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.605455 | orchestrator |
2025-05-03 00:56:21.605462 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-05-03 00:56:21.605470 | orchestrator | Saturday 03 May 2025 00:45:06 +0000 (0:00:00.566) 0:01:40.024 **********
2025-05-03 00:56:21.605477 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.605484 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.605490 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.605497 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.605504 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.605511 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.605517 | orchestrator |
2025-05-03 00:56:21.605524 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-05-03 00:56:21.605531 | orchestrator | Saturday 03 May 2025 00:45:06 +0000 (0:00:00.846) 0:01:40.871 **********
2025-05-03 00:56:21.605538 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.605545 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.605552 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.605559 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.605566 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.605573 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.605579 | orchestrator |
2025-05-03 00:56:21.605586 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-05-03 00:56:21.605594 | orchestrator | Saturday 03 May 2025 00:45:07 +0000 (0:00:00.801) 0:01:41.672 **********
2025-05-03 00:56:21.605604 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.605611 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.605618 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.605625 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.605632 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.605642 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.605649 | orchestrator |
2025-05-03 00:56:21.605655 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-05-03 00:56:21.605663 | orchestrator | Saturday 03 May 2025 00:45:08 +0000 (0:00:01.092) 0:01:42.765 **********
2025-05-03 00:56:21.605670 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.605676 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.605683 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.605690 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.605697 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.605708 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.605715 | orchestrator |
2025-05-03 00:56:21.605722 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-05-03 00:56:21.605729 | orchestrator | Saturday 03 May 2025 00:45:09 +0000 (0:00:00.845) 0:01:43.611 **********
2025-05-03 00:56:21.605736 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.605743 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.605750 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.605757 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.605763 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.605770 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.605777 | orchestrator |
2025-05-03 00:56:21.605784 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-05-03 00:56:21.605791 | orchestrator | Saturday 03 May 2025 00:45:10 +0000 (0:00:00.806) 0:01:44.417 **********
2025-05-03 00:56:21.605798 | orchestrator | skipping: [testbed-node-0] => (item=) 
2025-05-03 00:56:21.605805 | orchestrator | skipping: [testbed-node-0] => (item=) 
2025-05-03 00:56:21.605812 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.605819 | orchestrator | skipping: [testbed-node-1] => (item=) 
2025-05-03 00:56:21.605826 | orchestrator | skipping: [testbed-node-1] => (item=) 
2025-05-03 00:56:21.605832 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.605839 | orchestrator | skipping: [testbed-node-2] => (item=) 
2025-05-03 00:56:21.605846 | orchestrator | skipping: [testbed-node-2] => (item=) 
2025-05-03 00:56:21.605853 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.605860 | orchestrator | skipping: [testbed-node-3] => (item=) 
2025-05-03 00:56:21.605867 | orchestrator | skipping: [testbed-node-3] => (item=) 
2025-05-03 00:56:21.605874 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.605881 | orchestrator | skipping: [testbed-node-4] => (item=) 
2025-05-03 00:56:21.605887 | orchestrator | skipping: [testbed-node-4] => (item=) 
2025-05-03 00:56:21.605894 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.605901 | orchestrator | skipping: [testbed-node-5] => (item=) 
2025-05-03 00:56:21.605911 | orchestrator | skipping: [testbed-node-5] => (item=) 
2025-05-03 00:56:21.605918 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.605925 | orchestrator |
2025-05-03 00:56:21.605932 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-05-03 00:56:21.605939 | orchestrator | Saturday 03 May 2025 00:45:11 +0000 (0:00:00.660) 0:01:45.078 **********
2025-05-03 00:56:21.605946 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target) 
2025-05-03 00:56:21.605953 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target) 
2025-05-03 00:56:21.605960 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.605967 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target) 
2025-05-03 00:56:21.605974 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target) 
2025-05-03 00:56:21.605981 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.606052 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target) 
2025-05-03 00:56:21.606065 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target) 
2025-05-03 00:56:21.606073 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.606081 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target) 
2025-05-03 00:56:21.606089 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target) 
2025-05-03 00:56:21.606097 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.606105 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target) 
2025-05-03 00:56:21.606112 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target) 
2025-05-03 00:56:21.606120 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.606128 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target) 
2025-05-03 00:56:21.606136 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target) 
2025-05-03 00:56:21.606148 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.606156 | orchestrator |
2025-05-03 00:56:21.606164 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-05-03 00:56:21.606172 | orchestrator | Saturday 03 May 2025 00:45:11 +0000 (0:00:00.854) 0:01:45.932 **********
2025-05-03 00:56:21.606180 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.606188 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.606196 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.606204 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.606212 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.606219 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.606227 | orchestrator |
2025-05-03 00:56:21.606235 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-05-03 00:56:21.606243 | orchestrator | Saturday 03 May 2025 00:45:12 +0000 (0:00:00.559) 0:01:46.491 **********
2025-05-03 00:56:21.606265 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.606277 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.606289 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.606301 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.606313 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.606321 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.606328 | orchestrator |
2025-05-03 00:56:21.606335 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-03 00:56:21.606343 | orchestrator | Saturday 03 May 2025 00:45:13 +0000 (0:00:00.667) 0:01:47.159 **********
2025-05-03 00:56:21.606349 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.606356 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.606363 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.606370 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.606377 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.606384 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.606391 | orchestrator |
2025-05-03 00:56:21.606397 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-03 00:56:21.606404 | orchestrator | Saturday 03 May 2025 00:45:13 +0000 (0:00:00.544) 0:01:47.703 **********
2025-05-03 00:56:21.606411 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.606418 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.606425 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.606435 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.606447 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.606458 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.606468 | orchestrator |
2025-05-03
00:56:21.606489 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-03 00:56:21.606496 | orchestrator | Saturday 03 May 2025 00:45:14 +0000 (0:00:00.851) 0:01:48.555 ********** 2025-05-03 00:56:21.606503 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.606510 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.606521 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.606528 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.606535 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.606541 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.606548 | orchestrator | 2025-05-03 00:56:21.606558 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-03 00:56:21.606565 | orchestrator | Saturday 03 May 2025 00:45:15 +0000 (0:00:00.786) 0:01:49.341 ********** 2025-05-03 00:56:21.606572 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.606578 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.606585 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.606592 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.606599 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.606605 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.606612 | orchestrator | 2025-05-03 00:56:21.606624 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-03 00:56:21.606631 | orchestrator | Saturday 03 May 2025 00:45:16 +0000 (0:00:00.899) 0:01:50.241 ********** 2025-05-03 00:56:21.606638 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-03 00:56:21.606645 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-03 00:56:21.606652 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-03 00:56:21.606658 | orchestrator | skipping: 
[testbed-node-0] 2025-05-03 00:56:21.606665 | orchestrator | 2025-05-03 00:56:21.606672 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-03 00:56:21.606679 | orchestrator | Saturday 03 May 2025 00:45:16 +0000 (0:00:00.453) 0:01:50.695 ********** 2025-05-03 00:56:21.606686 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-03 00:56:21.606693 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-03 00:56:21.606700 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-03 00:56:21.606707 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.606713 | orchestrator | 2025-05-03 00:56:21.606720 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-03 00:56:21.606727 | orchestrator | Saturday 03 May 2025 00:45:17 +0000 (0:00:00.582) 0:01:51.277 ********** 2025-05-03 00:56:21.606734 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-03 00:56:21.606792 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-03 00:56:21.606803 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-03 00:56:21.606810 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.606817 | orchestrator | 2025-05-03 00:56:21.606824 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-03 00:56:21.606830 | orchestrator | Saturday 03 May 2025 00:45:17 +0000 (0:00:00.452) 0:01:51.730 ********** 2025-05-03 00:56:21.606837 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.606844 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.606851 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.606858 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.606865 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.606872 | orchestrator | 
skipping: [testbed-node-5] 2025-05-03 00:56:21.606879 | orchestrator | 2025-05-03 00:56:21.606886 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-03 00:56:21.606893 | orchestrator | Saturday 03 May 2025 00:45:18 +0000 (0:00:00.630) 0:01:52.360 ********** 2025-05-03 00:56:21.606899 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-03 00:56:21.606906 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-03 00:56:21.606913 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.606920 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-03 00:56:21.606927 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.606934 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-03 00:56:21.606941 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.606947 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.606954 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-03 00:56:21.606961 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.606968 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-03 00:56:21.606975 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.606982 | orchestrator | 2025-05-03 00:56:21.606989 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-03 00:56:21.606996 | orchestrator | Saturday 03 May 2025 00:45:19 +0000 (0:00:01.282) 0:01:53.643 ********** 2025-05-03 00:56:21.607003 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.607009 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.607016 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.607023 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.607030 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.607046 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.607053 | orchestrator | 
2025-05-03 00:56:21.607060 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-03 00:56:21.607067 | orchestrator | Saturday 03 May 2025 00:45:20 +0000 (0:00:00.902) 0:01:54.545 ********** 2025-05-03 00:56:21.607074 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.607081 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.607088 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.607095 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.607102 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.607109 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.607116 | orchestrator | 2025-05-03 00:56:21.607122 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-03 00:56:21.607129 | orchestrator | Saturday 03 May 2025 00:45:21 +0000 (0:00:00.638) 0:01:55.184 ********** 2025-05-03 00:56:21.607136 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-03 00:56:21.607143 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.607150 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-03 00:56:21.607157 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.607164 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-03 00:56:21.607170 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.607177 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-03 00:56:21.607184 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.607191 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-03 00:56:21.607198 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.607205 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-03 00:56:21.607211 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.607218 | orchestrator | 2025-05-03 00:56:21.607225 | orchestrator | TASK [ceph-facts : set_fact 
rgw_instances_host] ******************************** 2025-05-03 00:56:21.607232 | orchestrator | Saturday 03 May 2025 00:45:22 +0000 (0:00:01.274) 0:01:56.459 ********** 2025-05-03 00:56:21.607239 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.607246 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.607267 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.607275 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-03 00:56:21.607282 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.607293 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-03 00:56:21.607300 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.607307 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-03 00:56:21.607314 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.607321 | orchestrator | 2025-05-03 00:56:21.607328 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-03 00:56:21.607335 | orchestrator | Saturday 03 May 2025 00:45:23 +0000 (0:00:00.664) 0:01:57.123 ********** 2025-05-03 00:56:21.607342 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-03 00:56:21.607349 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-03 00:56:21.607356 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-03 00:56:21.607363 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.607370 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-03 00:56:21.607377 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-03 00:56:21.607384 | 
orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-03 00:56:21.607443 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-03 00:56:21.607458 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-03 00:56:21.607470 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-03 00:56:21.607477 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.607484 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-03 00:56:21.607491 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-03 00:56:21.607498 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-03 00:56:21.607505 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.607515 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-03 00:56:21.607523 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-03 00:56:21.607535 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-03 00:56:21.607548 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.607556 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.607563 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-03 00:56:21.607570 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-03 00:56:21.607577 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-03 00:56:21.607584 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.607591 | orchestrator | 2025-05-03 00:56:21.607598 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-03 00:56:21.607605 | orchestrator | Saturday 03 May 2025 00:45:24 +0000 (0:00:01.611) 0:01:58.735 ********** 2025-05-03 00:56:21.607612 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.607619 | orchestrator | skipping: 
[testbed-node-1] 2025-05-03 00:56:21.607626 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.607633 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.607640 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.607647 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.607755 | orchestrator | 2025-05-03 00:56:21.607781 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-03 00:56:21.607789 | orchestrator | Saturday 03 May 2025 00:45:26 +0000 (0:00:01.434) 0:02:00.169 ********** 2025-05-03 00:56:21.607796 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.607802 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.607809 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.607816 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-03 00:56:21.607823 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.607830 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-03 00:56:21.607837 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.607844 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-03 00:56:21.607851 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.607858 | orchestrator | 2025-05-03 00:56:21.607865 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-03 00:56:21.607872 | orchestrator | Saturday 03 May 2025 00:45:27 +0000 (0:00:01.497) 0:02:01.666 ********** 2025-05-03 00:56:21.607879 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.607886 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.607893 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.607899 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.607906 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.607913 | orchestrator | skipping: [testbed-node-5] 2025-05-03 
00:56:21.607920 | orchestrator | 2025-05-03 00:56:21.607927 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-03 00:56:21.607934 | orchestrator | Saturday 03 May 2025 00:45:28 +0000 (0:00:01.237) 0:02:02.903 ********** 2025-05-03 00:56:21.607941 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.607948 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.607955 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.607962 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.607972 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.607979 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.607991 | orchestrator | 2025-05-03 00:56:21.607998 | orchestrator | TASK [ceph-container-common : generate systemd ceph-mon target file] *********** 2025-05-03 00:56:21.608005 | orchestrator | Saturday 03 May 2025 00:45:30 +0000 (0:00:01.237) 0:02:04.141 ********** 2025-05-03 00:56:21.608012 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:56:21.608019 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:21.608026 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:56:21.608033 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:56:21.608040 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:56:21.608046 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:56:21.608053 | orchestrator | 2025-05-03 00:56:21.608063 | orchestrator | TASK [ceph-container-common : enable ceph.target] ****************************** 2025-05-03 00:56:21.608071 | orchestrator | Saturday 03 May 2025 00:45:31 +0000 (0:00:01.442) 0:02:05.584 ********** 2025-05-03 00:56:21.608079 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:56:21.608087 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:56:21.608095 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:21.608103 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:56:21.608111 
| orchestrator | changed: [testbed-node-1] 2025-05-03 00:56:21.608118 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:56:21.608126 | orchestrator | 2025-05-03 00:56:21.608134 | orchestrator | TASK [ceph-container-common : include prerequisites.yml] *********************** 2025-05-03 00:56:21.608142 | orchestrator | Saturday 03 May 2025 00:45:33 +0000 (0:00:01.927) 0:02:07.511 ********** 2025-05-03 00:56:21.608150 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 00:56:21.608158 | orchestrator | 2025-05-03 00:56:21.608166 | orchestrator | TASK [ceph-container-common : stop lvmetad] ************************************ 2025-05-03 00:56:21.608174 | orchestrator | Saturday 03 May 2025 00:45:34 +0000 (0:00:01.457) 0:02:08.969 ********** 2025-05-03 00:56:21.608182 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.608189 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.608197 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.608300 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.608314 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.608322 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.608329 | orchestrator | 2025-05-03 00:56:21.608337 | orchestrator | TASK [ceph-container-common : disable and mask lvmetad service] **************** 2025-05-03 00:56:21.608346 | orchestrator | Saturday 03 May 2025 00:45:35 +0000 (0:00:00.983) 0:02:09.952 ********** 2025-05-03 00:56:21.608354 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.608362 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.608368 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.608375 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.608382 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.608393 | 
orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.608400 | orchestrator | 2025-05-03 00:56:21.608407 | orchestrator | TASK [ceph-container-common : remove ceph udev rules] ************************** 2025-05-03 00:56:21.608414 | orchestrator | Saturday 03 May 2025 00:45:36 +0000 (0:00:00.653) 0:02:10.605 ********** 2025-05-03 00:56:21.608421 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-03 00:56:21.608429 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-03 00:56:21.608435 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-03 00:56:21.608441 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-03 00:56:21.608447 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-03 00:56:21.608453 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-03 00:56:21.608459 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-03 00:56:21.608470 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-03 00:56:21.608477 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-03 00:56:21.608483 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-03 00:56:21.608489 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-03 00:56:21.608495 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-03 00:56:21.608501 | orchestrator | 2025-05-03 00:56:21.608507 | orchestrator | TASK [ceph-container-common : ensure tmpfiles.d is present] ******************** 2025-05-03 00:56:21.608513 | orchestrator | 
Saturday 03 May 2025 00:45:38 +0000 (0:00:02.039) 0:02:12.645 ********** 2025-05-03 00:56:21.608519 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:56:21.608526 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:21.608532 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:56:21.608538 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:56:21.608544 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:56:21.608550 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:56:21.608556 | orchestrator | 2025-05-03 00:56:21.608562 | orchestrator | TASK [ceph-container-common : restore certificates selinux context] ************ 2025-05-03 00:56:21.608568 | orchestrator | Saturday 03 May 2025 00:45:39 +0000 (0:00:01.030) 0:02:13.675 ********** 2025-05-03 00:56:21.608574 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.608581 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.608587 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.608593 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.608599 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.608605 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.608611 | orchestrator | 2025-05-03 00:56:21.608617 | orchestrator | TASK [ceph-container-common : include registry.yml] **************************** 2025-05-03 00:56:21.608624 | orchestrator | Saturday 03 May 2025 00:45:40 +0000 (0:00:00.853) 0:02:14.529 ********** 2025-05-03 00:56:21.608630 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.608636 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.608642 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.608648 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.608654 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.608660 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.608666 | orchestrator | 2025-05-03 00:56:21.608673 | 
orchestrator | TASK [ceph-container-common : include fetch_image.yml] ************************* 2025-05-03 00:56:21.608679 | orchestrator | Saturday 03 May 2025 00:45:41 +0000 (0:00:00.649) 0:02:15.178 ********** 2025-05-03 00:56:21.608685 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 00:56:21.608692 | orchestrator | 2025-05-03 00:56:21.608698 | orchestrator | TASK [ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image] *** 2025-05-03 00:56:21.608704 | orchestrator | Saturday 03 May 2025 00:45:42 +0000 (0:00:01.346) 0:02:16.525 ********** 2025-05-03 00:56:21.608710 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.608716 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.608723 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.608729 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:21.608735 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.608741 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.608747 | orchestrator | 2025-05-03 00:56:21.608756 | orchestrator | TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container images] *** 2025-05-03 00:56:21.608762 | orchestrator | Saturday 03 May 2025 00:46:27 +0000 (0:00:45.311) 0:03:01.836 ********** 2025-05-03 00:56:21.608768 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-03 00:56:21.608775 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-03 00:56:21.608785 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-03 00:56:21.608791 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.608836 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-03 00:56:21.608845 | orchestrator | 
skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-03 00:56:21.608852 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-03 00:56:21.608858 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.608864 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-03 00:56:21.608870 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-03 00:56:21.608876 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-03 00:56:21.608883 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.608889 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-03 00:56:21.608895 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-03 00:56:21.608901 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-03 00:56:21.608907 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.608913 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-03 00:56:21.608919 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-03 00:56:21.608925 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-03 00:56:21.608931 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.608938 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-03 00:56:21.608944 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-03 00:56:21.608950 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-03 00:56:21.608956 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.608962 | orchestrator | 2025-05-03 00:56:21.608968 
| orchestrator | TASK [ceph-container-common : pulling node-exporter container image] *********** 2025-05-03 00:56:21.608974 | orchestrator | Saturday 03 May 2025 00:46:28 +0000 (0:00:00.988) 0:03:02.825 ********** 2025-05-03 00:56:21.608981 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.608987 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.608993 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.608999 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.609005 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.609016 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.609027 | orchestrator | 2025-05-03 00:56:21.609040 | orchestrator | TASK [ceph-container-common : export local ceph dev image] ********************* 2025-05-03 00:56:21.609058 | orchestrator | Saturday 03 May 2025 00:46:29 +0000 (0:00:00.832) 0:03:03.657 ********** 2025-05-03 00:56:21.609068 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.609076 | orchestrator | 2025-05-03 00:56:21.609086 | orchestrator | TASK [ceph-container-common : copy ceph dev image file] ************************ 2025-05-03 00:56:21.609095 | orchestrator | Saturday 03 May 2025 00:46:29 +0000 (0:00:00.311) 0:03:03.969 ********** 2025-05-03 00:56:21.609104 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.609113 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.609122 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.609131 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.609142 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.609151 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.609160 | orchestrator | 2025-05-03 00:56:21.609169 | orchestrator | TASK [ceph-container-common : load ceph dev image] ***************************** 2025-05-03 00:56:21.609179 | orchestrator | Saturday 03 May 2025 00:46:30 +0000 (0:00:00.689) 0:03:04.658 ********** 
2025-05-03 00:56:21.609198 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.609207 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.609218 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.609228 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.609236 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.609246 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.609270 | orchestrator | 2025-05-03 00:56:21.609281 | orchestrator | TASK [ceph-container-common : remove tmp ceph dev image file] ****************** 2025-05-03 00:56:21.609290 | orchestrator | Saturday 03 May 2025 00:46:31 +0000 (0:00:00.973) 0:03:05.632 ********** 2025-05-03 00:56:21.609301 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.609327 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.609333 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.609339 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.609345 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.609351 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.609358 | orchestrator | 2025-05-03 00:56:21.609364 | orchestrator | TASK [ceph-container-common : get ceph version] ******************************** 2025-05-03 00:56:21.609373 | orchestrator | Saturday 03 May 2025 00:46:32 +0000 (0:00:00.616) 0:03:06.248 ********** 2025-05-03 00:56:21.609379 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:21.609385 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.609391 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.609397 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.609403 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.609410 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.609416 | orchestrator | 2025-05-03 00:56:21.609422 | orchestrator | TASK [ceph-container-common : set_fact ceph_version ceph_version.stdout.split] *** 2025-05-03 
00:56:21.609428 | orchestrator | Saturday 03 May 2025 00:46:34 +0000 (0:00:01.907) 0:03:08.156 ********** 2025-05-03 00:56:21.609434 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:21.609440 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.609446 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.609453 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.609459 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.609465 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.609471 | orchestrator | 2025-05-03 00:56:21.609478 | orchestrator | TASK [ceph-container-common : include release.yml] ***************************** 2025-05-03 00:56:21.609485 | orchestrator | Saturday 03 May 2025 00:46:34 +0000 (0:00:00.616) 0:03:08.772 ********** 2025-05-03 00:56:21.609551 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 00:56:21.609561 | orchestrator | 2025-05-03 00:56:21.609568 | orchestrator | TASK [ceph-container-common : set_fact ceph_release jewel] ********************* 2025-05-03 00:56:21.609574 | orchestrator | Saturday 03 May 2025 00:46:36 +0000 (0:00:01.244) 0:03:10.017 ********** 2025-05-03 00:56:21.609580 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.609587 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.609593 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.609599 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.609605 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.609611 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.609618 | orchestrator | 2025-05-03 00:56:21.609624 | orchestrator | TASK [ceph-container-common : set_fact ceph_release kraken] ******************** 2025-05-03 00:56:21.609630 | orchestrator | Saturday 03 May 2025 00:46:36 +0000 (0:00:00.824) 0:03:10.842 ********** 2025-05-03 
00:56:21.609636 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.609642 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.609648 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.609654 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.609660 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.609666 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.609673 | orchestrator | 2025-05-03 00:56:21.609730 | orchestrator | TASK [ceph-container-common : set_fact ceph_release luminous] ****************** 2025-05-03 00:56:21.609737 | orchestrator | Saturday 03 May 2025 00:46:37 +0000 (0:00:00.625) 0:03:11.468 ********** 2025-05-03 00:56:21.609744 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.609750 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.609756 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.609762 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.609768 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.609774 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.609780 | orchestrator | 2025-05-03 00:56:21.609787 | orchestrator | TASK [ceph-container-common : set_fact ceph_release mimic] ********************* 2025-05-03 00:56:21.609793 | orchestrator | Saturday 03 May 2025 00:46:38 +0000 (0:00:00.919) 0:03:12.388 ********** 2025-05-03 00:56:21.609799 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.609805 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.609811 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.609817 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.609823 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.609829 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.609835 | orchestrator | 2025-05-03 00:56:21.609841 | orchestrator | TASK [ceph-container-common : set_fact ceph_release nautilus] 
****************** 2025-05-03 00:56:21.609847 | orchestrator | Saturday 03 May 2025 00:46:39 +0000 (0:00:00.612) 0:03:13.000 ********** 2025-05-03 00:56:21.609854 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.609860 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.609866 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.609872 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.609878 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.609884 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.609890 | orchestrator | 2025-05-03 00:56:21.609896 | orchestrator | TASK [ceph-container-common : set_fact ceph_release octopus] ******************* 2025-05-03 00:56:21.609903 | orchestrator | Saturday 03 May 2025 00:46:39 +0000 (0:00:00.735) 0:03:13.735 ********** 2025-05-03 00:56:21.609909 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.609915 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.609921 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.609927 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.609936 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.609943 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.609949 | orchestrator | 2025-05-03 00:56:21.609955 | orchestrator | TASK [ceph-container-common : set_fact ceph_release pacific] ******************* 2025-05-03 00:56:21.609961 | orchestrator | Saturday 03 May 2025 00:46:40 +0000 (0:00:00.749) 0:03:14.485 ********** 2025-05-03 00:56:21.609968 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.609974 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.609980 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.609986 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.609992 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.609998 | orchestrator | skipping: [testbed-node-5] 2025-05-03 
00:56:21.610004 | orchestrator | 2025-05-03 00:56:21.610010 | orchestrator | TASK [ceph-container-common : set_fact ceph_release quincy] ******************** 2025-05-03 00:56:21.610034 | orchestrator | Saturday 03 May 2025 00:46:41 +0000 (0:00:00.785) 0:03:15.270 ********** 2025-05-03 00:56:21.610042 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:21.610048 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.610054 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.610061 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.610067 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.610073 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.610079 | orchestrator | 2025-05-03 00:56:21.610085 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-03 00:56:21.610091 | orchestrator | Saturday 03 May 2025 00:46:42 +0000 (0:00:01.142) 0:03:16.413 ********** 2025-05-03 00:56:21.610102 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 00:56:21.610109 | orchestrator | 2025-05-03 00:56:21.610115 | orchestrator | TASK [ceph-config : create ceph initial directories] *************************** 2025-05-03 00:56:21.610121 | orchestrator | Saturday 03 May 2025 00:46:43 +0000 (0:00:01.010) 0:03:17.424 ********** 2025-05-03 00:56:21.610127 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-05-03 00:56:21.610134 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-05-03 00:56:21.610140 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-05-03 00:56:21.610146 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-05-03 00:56:21.610152 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-05-03 00:56:21.610158 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 
2025-05-03 00:56:21.610208 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-05-03 00:56:21.610217 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-05-03 00:56:21.610224 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-05-03 00:56:21.610231 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-05-03 00:56:21.610238 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-05-03 00:56:21.610245 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-05-03 00:56:21.610351 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-05-03 00:56:21.610381 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-05-03 00:56:21.610388 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-05-03 00:56:21.610395 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-05-03 00:56:21.610403 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-05-03 00:56:21.610410 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-05-03 00:56:21.610416 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-05-03 00:56:21.610423 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-05-03 00:56:21.610431 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-05-03 00:56:21.610438 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-05-03 00:56:21.610445 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-05-03 00:56:21.610451 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-05-03 00:56:21.610458 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-05-03 00:56:21.610465 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-05-03 00:56:21.610472 | orchestrator | changed: 
[testbed-node-3] => (item=/var/lib/ceph/mds) 2025-05-03 00:56:21.610478 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-05-03 00:56:21.610485 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-05-03 00:56:21.610492 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-05-03 00:56:21.610499 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-05-03 00:56:21.610506 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-05-03 00:56:21.610514 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-05-03 00:56:21.610521 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-05-03 00:56:21.610528 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-05-03 00:56:21.610535 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-05-03 00:56:21.610547 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-05-03 00:56:21.610553 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-05-03 00:56:21.610560 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-05-03 00:56:21.610566 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-05-03 00:56:21.610576 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-03 00:56:21.610583 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-05-03 00:56:21.610589 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-05-03 00:56:21.610595 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-03 00:56:21.610647 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-03 00:56:21.610671 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-03 00:56:21.610678 | 
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-03 00:56:21.610684 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-03 00:56:21.610690 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-03 00:56:21.610696 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-03 00:56:21.610701 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-03 00:56:21.610707 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-03 00:56:21.610713 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-03 00:56:21.610719 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-03 00:56:21.610724 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-03 00:56:21.610730 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-03 00:56:21.610736 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-03 00:56:21.610742 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-03 00:56:21.610748 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-03 00:56:21.610754 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-03 00:56:21.610759 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-03 00:56:21.610765 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-03 00:56:21.610771 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-03 00:56:21.610777 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-03 00:56:21.610782 | orchestrator | changed: [testbed-node-5] => 
(item=/var/lib/ceph/bootstrap-osd) 2025-05-03 00:56:21.610788 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-03 00:56:21.610863 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-03 00:56:21.610872 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-03 00:56:21.610878 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-03 00:56:21.610884 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-03 00:56:21.610889 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-03 00:56:21.610895 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-03 00:56:21.610901 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-03 00:56:21.610907 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-03 00:56:21.610913 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-03 00:56:21.610919 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-05-03 00:56:21.610924 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-03 00:56:21.610930 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-03 00:56:21.610936 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-03 00:56:21.610947 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-05-03 00:56:21.610953 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-05-03 00:56:21.610959 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-05-03 00:56:21.610965 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-05-03 00:56:21.610970 | orchestrator | 
changed: [testbed-node-1] => (item=/var/run/ceph) 2025-05-03 00:56:21.610976 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-05-03 00:56:21.610982 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-05-03 00:56:21.610988 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-05-03 00:56:21.610994 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-05-03 00:56:21.610999 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-05-03 00:56:21.611005 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-05-03 00:56:21.611011 | orchestrator | 2025-05-03 00:56:21.611017 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-03 00:56:21.611026 | orchestrator | Saturday 03 May 2025 00:46:49 +0000 (0:00:06.041) 0:03:23.466 ********** 2025-05-03 00:56:21.611033 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.611039 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.611045 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.611051 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 00:56:21.611057 | orchestrator | 2025-05-03 00:56:21.611063 | orchestrator | TASK [ceph-config : create rados gateway instance directories] ***************** 2025-05-03 00:56:21.611069 | orchestrator | Saturday 03 May 2025 00:46:50 +0000 (0:00:01.469) 0:03:24.936 ********** 2025-05-03 00:56:21.611075 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-03 00:56:21.611081 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-03 00:56:21.611087 | orchestrator | changed: [testbed-node-5] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-03 00:56:21.611093 | orchestrator | 2025-05-03 00:56:21.611099 | orchestrator | TASK [ceph-config : generate environment file] ********************************* 2025-05-03 00:56:21.611104 | orchestrator | Saturday 03 May 2025 00:46:51 +0000 (0:00:00.985) 0:03:25.921 ********** 2025-05-03 00:56:21.611110 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-03 00:56:21.611116 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-03 00:56:21.611122 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-03 00:56:21.611128 | orchestrator | 2025-05-03 00:56:21.611134 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-03 00:56:21.611140 | orchestrator | Saturday 03 May 2025 00:46:53 +0000 (0:00:01.288) 0:03:27.210 ********** 2025-05-03 00:56:21.611145 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.611151 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.611157 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.611163 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.611169 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.611175 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.611181 | orchestrator | 2025-05-03 00:56:21.611187 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-03 00:56:21.611192 | orchestrator | Saturday 03 May 2025 00:46:54 +0000 (0:00:00.938) 0:03:28.148 ********** 2025-05-03 00:56:21.611198 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.611208 | orchestrator | 
skipping: [testbed-node-1] 2025-05-03 00:56:21.611214 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.611220 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.611226 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.611231 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.611237 | orchestrator | 2025-05-03 00:56:21.611243 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-03 00:56:21.611308 | orchestrator | Saturday 03 May 2025 00:46:54 +0000 (0:00:00.730) 0:03:28.879 ********** 2025-05-03 00:56:21.611317 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.611323 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.611329 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.611335 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.611341 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.611347 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.611353 | orchestrator | 2025-05-03 00:56:21.611359 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-03 00:56:21.611365 | orchestrator | Saturday 03 May 2025 00:46:55 +0000 (0:00:01.000) 0:03:29.879 ********** 2025-05-03 00:56:21.611371 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.611377 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.611383 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.611389 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.611395 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.611400 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.611406 | orchestrator | 2025-05-03 00:56:21.611412 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-03 00:56:21.611418 | orchestrator | Saturday 03 May 2025 00:46:56 +0000 (0:00:00.712) 0:03:30.592 
********** 2025-05-03 00:56:21.611424 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.611430 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.611436 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.611442 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.611448 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.611454 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.611459 | orchestrator | 2025-05-03 00:56:21.611465 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-03 00:56:21.611471 | orchestrator | Saturday 03 May 2025 00:46:57 +0000 (0:00:01.137) 0:03:31.729 ********** 2025-05-03 00:56:21.611477 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.611483 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.611489 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.611495 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.611501 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.611511 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.611517 | orchestrator | 2025-05-03 00:56:21.611523 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-03 00:56:21.611529 | orchestrator | Saturday 03 May 2025 00:46:58 +0000 (0:00:00.602) 0:03:32.331 ********** 2025-05-03 00:56:21.611535 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.611541 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.611547 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.611553 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.611559 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.611564 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.611570 | orchestrator | 2025-05-03 00:56:21.611576 | orchestrator | 
TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-03 00:56:21.611582 | orchestrator | Saturday 03 May 2025 00:46:59 +0000 (0:00:00.881) 0:03:33.212 ********** 2025-05-03 00:56:21.611588 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.611594 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.611600 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.611610 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.611616 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.611621 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.611627 | orchestrator | 2025-05-03 00:56:21.611633 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-03 00:56:21.611639 | orchestrator | Saturday 03 May 2025 00:46:59 +0000 (0:00:00.642) 0:03:33.855 ********** 2025-05-03 00:56:21.611645 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.611680 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.611687 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.611706 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.611712 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.611718 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.611724 | orchestrator | 2025-05-03 00:56:21.611729 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-03 00:56:21.611744 | orchestrator | Saturday 03 May 2025 00:47:02 +0000 (0:00:02.446) 0:03:36.302 ********** 2025-05-03 00:56:21.611750 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.611756 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.611762 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.611768 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.611781 | orchestrator | ok: [testbed-node-4] 
2025-05-03 00:56:21.611787 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.611793 | orchestrator | 2025-05-03 00:56:21.611799 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-03 00:56:21.611805 | orchestrator | Saturday 03 May 2025 00:47:02 +0000 (0:00:00.622) 0:03:36.924 ********** 2025-05-03 00:56:21.611811 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-03 00:56:21.611817 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-03 00:56:21.611823 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.611829 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-03 00:56:21.611838 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-03 00:56:21.611844 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.611850 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-03 00:56:21.611856 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-03 00:56:21.611862 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.611868 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-03 00:56:21.611874 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-03 00:56:21.611880 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.611886 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-03 00:56:21.611892 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-03 00:56:21.611898 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.611904 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-03 00:56:21.611909 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-03 00:56:21.611915 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.611921 | orchestrator | 2025-05-03 00:56:21.611969 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-03 00:56:21.611978 | orchestrator | 
Saturday 03 May 2025 00:47:03 +0000 (0:00:00.968) 0:03:37.893 ********** 2025-05-03 00:56:21.611985 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-03 00:56:21.611994 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-03 00:56:21.612001 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.612008 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-03 00:56:21.612015 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-03 00:56:21.612021 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.612028 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-03 00:56:21.612035 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-03 00:56:21.612050 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.612057 | orchestrator | ok: [testbed-node-3] => (item=osd memory target) 2025-05-03 00:56:21.612063 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target) 2025-05-03 00:56:21.612070 | orchestrator | ok: [testbed-node-4] => (item=osd memory target) 2025-05-03 00:56:21.612077 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target) 2025-05-03 00:56:21.612083 | orchestrator | ok: [testbed-node-5] => (item=osd memory target) 2025-05-03 00:56:21.612090 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target) 2025-05-03 00:56:21.612097 | orchestrator | 2025-05-03 00:56:21.612103 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-03 00:56:21.612110 | orchestrator | Saturday 03 May 2025 00:47:05 +0000 (0:00:01.189) 0:03:39.082 ********** 2025-05-03 00:56:21.612117 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.612123 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.612130 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.612137 | orchestrator | ok: 
[testbed-node-3] 2025-05-03 00:56:21.612143 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.612149 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.612159 | orchestrator | 2025-05-03 00:56:21.612166 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-03 00:56:21.612173 | orchestrator | Saturday 03 May 2025 00:47:05 +0000 (0:00:00.732) 0:03:39.815 ********** 2025-05-03 00:56:21.612180 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.612186 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.612193 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.612200 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.612207 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.612213 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.612220 | orchestrator | 2025-05-03 00:56:21.612227 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-03 00:56:21.612234 | orchestrator | Saturday 03 May 2025 00:47:06 +0000 (0:00:00.879) 0:03:40.695 ********** 2025-05-03 00:56:21.612241 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.612247 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.612270 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.612277 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.612285 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.612291 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.612297 | orchestrator | 2025-05-03 00:56:21.612303 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-03 00:56:21.612309 | orchestrator | Saturday 03 May 2025 00:47:07 +0000 (0:00:00.661) 0:03:41.357 ********** 2025-05-03 00:56:21.612314 | orchestrator | skipping: [testbed-node-0] 2025-05-03 
00:56:21.612320 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.612326 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.612332 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.612338 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.612344 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.612349 | orchestrator |
2025-05-03 00:56:21.612358 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-03 00:56:21.612364 | orchestrator | Saturday 03 May 2025 00:47:08 +0000 (0:00:00.924) 0:03:42.281 **********
2025-05-03 00:56:21.612370 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.612376 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.612381 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.612387 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.612393 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.612399 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.612405 | orchestrator |
2025-05-03 00:56:21.612410 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-03 00:56:21.612416 | orchestrator | Saturday 03 May 2025 00:47:09 +0000 (0:00:00.772) 0:03:43.053 **********
2025-05-03 00:56:21.612425 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.612431 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.612437 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.612443 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.612451 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.612461 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.612468 | orchestrator |
2025-05-03 00:56:21.612474 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-03 00:56:21.612480 | orchestrator | Saturday 03 May 2025 00:47:10 +0000 (0:00:01.143) 0:03:44.197 **********
2025-05-03 00:56:21.612486 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-03 00:56:21.612492 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-03 00:56:21.612498 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-03 00:56:21.612504 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.612510 | orchestrator |
2025-05-03 00:56:21.612515 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-03 00:56:21.612521 | orchestrator | Saturday 03 May 2025 00:47:10 +0000 (0:00:00.435) 0:03:44.633 **********
2025-05-03 00:56:21.612527 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-03 00:56:21.612533 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-03 00:56:21.612577 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-03 00:56:21.612585 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.612591 | orchestrator |
2025-05-03 00:56:21.612597 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-03 00:56:21.612603 | orchestrator | Saturday 03 May 2025 00:47:11 +0000 (0:00:00.472) 0:03:45.105 **********
2025-05-03 00:56:21.612608 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-03 00:56:21.612614 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-03 00:56:21.612620 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-03 00:56:21.612626 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.612632 | orchestrator |
2025-05-03 00:56:21.612638 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-03 00:56:21.612644 | orchestrator | Saturday 03 May 2025 00:47:11 +0000 (0:00:00.442) 0:03:45.547 **********
2025-05-03 00:56:21.612650 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.612656 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.612661 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.612667 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.612673 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.612679 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.612685 | orchestrator |
2025-05-03 00:56:21.612691 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-03 00:56:21.612696 | orchestrator | Saturday 03 May 2025 00:47:12 +0000 (0:00:00.703) 0:03:46.251 **********
2025-05-03 00:56:21.612702 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-03 00:56:21.612708 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.612714 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-03 00:56:21.612720 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.612726 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-03 00:56:21.612732 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.612737 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-05-03 00:56:21.612743 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-05-03 00:56:21.612749 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-05-03 00:56:21.612755 | orchestrator |
2025-05-03 00:56:21.612761 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-03 00:56:21.612767 | orchestrator | Saturday 03 May 2025 00:47:13 +0000 (0:00:01.240) 0:03:47.492 **********
2025-05-03 00:56:21.612772 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.612782 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.612788 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.612794 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.612800 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.612806 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.612812 | orchestrator |
2025-05-03 00:56:21.612818 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-03 00:56:21.612824 | orchestrator | Saturday 03 May 2025 00:47:14 +0000 (0:00:00.567) 0:03:48.060 **********
2025-05-03 00:56:21.612829 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.612835 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.612841 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.612847 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.612853 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.612858 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.612864 | orchestrator |
2025-05-03 00:56:21.612870 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-03 00:56:21.612876 | orchestrator | Saturday 03 May 2025 00:47:14 +0000 (0:00:00.633) 0:03:48.693 **********
2025-05-03 00:56:21.612882 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-03 00:56:21.612888 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.612894 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-03 00:56:21.612899 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.612905 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-03 00:56:21.612911 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.612917 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-03 00:56:21.612923 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.612932 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-03 00:56:21.612938 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.612944 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-03 00:56:21.612950 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.612956 | orchestrator |
2025-05-03 00:56:21.612962 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-03 00:56:21.612967 | orchestrator | Saturday 03 May 2025 00:47:15 +0000 (0:00:00.676) 0:03:49.370 **********
2025-05-03 00:56:21.612973 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.612979 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.612985 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.612991 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-03 00:56:21.612997 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.613003 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-03 00:56:21.613009 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.613015 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-03 00:56:21.613020 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.613026 | orchestrator |
2025-05-03 00:56:21.613032 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-03 00:56:21.613038 | orchestrator | Saturday 03 May 2025 00:47:16 +0000 (0:00:00.743) 0:03:50.113 **********
2025-05-03 00:56:21.613044 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-03 00:56:21.613050 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-03 00:56:21.613082 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-03 00:56:21.613089 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.613146 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-05-03 00:56:21.613155 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-05-03 00:56:21.613161 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-05-03 00:56:21.613171 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.613177 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-05-03 00:56:21.613183 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-05-03 00:56:21.613189 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-05-03 00:56:21.613195 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.613201 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-03 00:56:21.613207 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-03 00:56:21.613212 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-03 00:56:21.613218 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-03 00:56:21.613224 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-03 00:56:21.613230 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.613236 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-03 00:56:21.613241 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-03 00:56:21.613247 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.613265 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-03 00:56:21.613271 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-03 00:56:21.613277 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.613282 | orchestrator |
2025-05-03 00:56:21.613288 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-05-03 00:56:21.613294 | orchestrator | Saturday 03 May 2025 00:47:17 +0000 (0:00:01.698) 0:03:51.812 **********
2025-05-03 00:56:21.613300 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:56:21.613306 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:56:21.613311 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:56:21.613317 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:56:21.613323 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:56:21.613329 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:56:21.613334 | orchestrator |
2025-05-03 00:56:21.613340 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-05-03 00:56:21.613346 | orchestrator | Saturday 03 May 2025 00:47:21 +0000 (0:00:03.701) 0:03:55.514 **********
2025-05-03 00:56:21.613352 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:56:21.613358 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:56:21.613364 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:56:21.613369 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:56:21.613375 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:56:21.613381 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:56:21.613387 | orchestrator |
2025-05-03 00:56:21.613392 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] **********************************
2025-05-03 00:56:21.613398 | orchestrator | Saturday 03 May 2025 00:47:22 +0000 (0:00:01.136) 0:03:56.651 **********
2025-05-03 00:56:21.613404 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.613410 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.613416 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.613422 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:56:21.613428 | orchestrator |
2025-05-03 00:56:21.613434 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ********
2025-05-03 00:56:21.613440 | orchestrator | Saturday 03 May 2025 00:47:23 +0000 (0:00:00.743) 0:03:57.394 **********
2025-05-03 00:56:21.613446 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.613451 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.613457 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.613463 | orchestrator |
2025-05-03 00:56:21.613472 | orchestrator | TASK [ceph-handler : set _mon_handler_called before restart] *******************
2025-05-03 00:56:21.613479 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-03 00:56:21.613488 | orchestrator |
2025-05-03 00:56:21.613494 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] ***********************
2025-05-03 00:56:21.613500 | orchestrator | Saturday 03 May 2025 00:47:24 +0000 (0:00:00.824) 0:03:58.219 **********
2025-05-03 00:56:21.613506 | orchestrator |
2025-05-03 00:56:21.613511 | orchestrator | TASK [ceph-handler : copy mon restart script] **********************************
2025-05-03 00:56:21.613517 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-03 00:56:21.613523 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-03 00:56:21.613529 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-03 00:56:21.613535 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.613540 | orchestrator |
2025-05-03 00:56:21.613546 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] ***********************
2025-05-03 00:56:21.613552 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:56:21.613558 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:56:21.613564 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:56:21.613570 | orchestrator |
2025-05-03 00:56:21.613575 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ********************
2025-05-03 00:56:21.613581 | orchestrator | Saturday 03 May 2025 00:47:25 +0000 (0:00:01.294) 0:03:59.513 **********
2025-05-03 00:56:21.613587 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-03 00:56:21.613596 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-03 00:56:21.613602 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-03 00:56:21.613608 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.613614 | orchestrator |
2025-05-03 00:56:21.613619 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] *********
2025-05-03 00:56:21.613625 | orchestrator | Saturday 03 May 2025 00:47:26 +0000 (0:00:00.908) 0:04:00.422 **********
2025-05-03 00:56:21.613631 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.613637 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.613677 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.613686 | orchestrator |
2025-05-03 00:56:21.613692 | orchestrator | TASK [ceph-handler : set _mon_handler_called after restart] ********************
2025-05-03 00:56:21.613698 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.613704 | orchestrator |
2025-05-03 00:56:21.613710 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] **********************************
2025-05-03 00:56:21.613716 | orchestrator | Saturday 03 May 2025 00:47:26 +0000 (0:00:00.517) 0:04:00.940 **********
2025-05-03 00:56:21.613722 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.613728 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.613733 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.613739 | orchestrator |
2025-05-03 00:56:21.613745 | orchestrator | TASK [ceph-handler : osds handler] *********************************************
2025-05-03 00:56:21.613751 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.613757 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.613763 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.613769 | orchestrator |
2025-05-03 00:56:21.613775 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] **********************************
2025-05-03 00:56:21.613780 | orchestrator | Saturday 03 May 2025 00:47:27 +0000 (0:00:00.594) 0:04:01.535 **********
2025-05-03 00:56:21.613786 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.613792 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.613798 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.613804 | orchestrator |
2025-05-03 00:56:21.613810 | orchestrator | TASK [ceph-handler : mdss handler] *********************************************
2025-05-03 00:56:21.613816 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.613825 | orchestrator |
2025-05-03 00:56:21.613831 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] **********************************
2025-05-03 00:56:21.613836 | orchestrator | Saturday 03 May 2025 00:47:28 +0000 (0:00:00.628) 0:04:02.163 **********
2025-05-03 00:56:21.613842 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.613852 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.613858 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.613864 | orchestrator |
2025-05-03 00:56:21.613870 | orchestrator | TASK [ceph-handler : rgws handler] *********************************************
2025-05-03 00:56:21.613875 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.613881 | orchestrator |
2025-05-03 00:56:21.613887 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] **************
2025-05-03 00:56:21.613893 | orchestrator | Saturday 03 May 2025 00:47:28 +0000 (0:00:00.651) 0:04:02.815 **********
2025-05-03 00:56:21.613899 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.613905 | orchestrator |
2025-05-03 00:56:21.613910 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] ****************************
2025-05-03 00:56:21.613916 | orchestrator | Saturday 03 May 2025 00:47:28 +0000 (0:00:00.123) 0:04:02.938 **********
2025-05-03 00:56:21.613922 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.613928 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.613934 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.613940 | orchestrator |
2025-05-03 00:56:21.613946 | orchestrator | TASK [ceph-handler : rbdmirrors handler] ***************************************
2025-05-03 00:56:21.613951 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.613957 | orchestrator |
2025-05-03 00:56:21.613987 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] **********************************
2025-05-03 00:56:21.613994 | orchestrator | Saturday 03 May 2025 00:47:29 +0000 (0:00:00.612) 0:04:03.551 **********
2025-05-03 00:56:21.614010 | orchestrator |
2025-05-03 00:56:21.614034 | orchestrator | TASK [ceph-handler : mgrs handler] *********************************************
2025-05-03 00:56:21.614042 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.614048 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:56:21.614054 | orchestrator |
2025-05-03 00:56:21.614059 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ********
2025-05-03 00:56:21.614065 | orchestrator | Saturday 03 May 2025 00:47:30 +0000 (0:00:00.695) 0:04:04.246 **********
2025-05-03 00:56:21.614071 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.614077 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.614083 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.614089 | orchestrator |
2025-05-03 00:56:21.614095 | orchestrator | TASK [ceph-handler : set _mgr_handler_called before restart] *******************
2025-05-03 00:56:21.614100 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-03 00:56:21.614106 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-03 00:56:21.614112 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-03 00:56:21.614118 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.614124 | orchestrator |
2025-05-03 00:56:21.614130 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] ***********************
2025-05-03 00:56:21.614139 | orchestrator | Saturday 03 May 2025 00:47:31 +0000 (0:00:00.801) 0:04:05.047 **********
2025-05-03 00:56:21.614145 | orchestrator |
2025-05-03 00:56:21.614151 | orchestrator | TASK [ceph-handler : copy mgr restart script] **********************************
2025-05-03 00:56:21.614157 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.614163 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.614169 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.614174 | orchestrator |
2025-05-03 00:56:21.614180 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] ***********************
2025-05-03 00:56:21.614186 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:56:21.614192 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:56:21.614198 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:56:21.614203 | orchestrator |
2025-05-03 00:56:21.614209 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ********************
2025-05-03 00:56:21.614215 | orchestrator | Saturday 03 May 2025 00:47:32 +0000 (0:00:01.187) 0:04:06.235 **********
2025-05-03 00:56:21.614221 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-03 00:56:21.614231 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-03 00:56:21.614237 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-03 00:56:21.614242 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.614248 | orchestrator |
2025-05-03 00:56:21.614287 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] *********
2025-05-03 00:56:21.614293 | orchestrator | Saturday 03 May 2025 00:47:33 +0000 (0:00:00.976) 0:04:07.211 **********
2025-05-03 00:56:21.614342 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.614351 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.614358 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.614365 | orchestrator |
2025-05-03 00:56:21.614371 | orchestrator | TASK [ceph-handler : set _mgr_handler_called after restart] ********************
2025-05-03 00:56:21.614377 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.614384 | orchestrator |
2025-05-03 00:56:21.614391 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] **********************************
2025-05-03 00:56:21.614398 | orchestrator | Saturday 03 May 2025 00:47:34 +0000 (0:00:01.064) 0:04:08.276 **********
2025-05-03 00:56:21.614404 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-03 00:56:21.614411 | orchestrator |
2025-05-03 00:56:21.614417 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ******
2025-05-03 00:56:21.614424 | orchestrator | Saturday 03 May 2025 00:47:34 +0000 (0:00:00.653) 0:04:08.930 **********
2025-05-03 00:56:21.614430 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.614436 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.614442 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.614447 | orchestrator |
2025-05-03 00:56:21.614453 | orchestrator | TASK [ceph-handler : rbd-target-api and rbd-target-gw handler] *****************
2025-05-03 00:56:21.614459 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.614465 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.614470 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.614476 | orchestrator |
2025-05-03 00:56:21.614482 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] ***********************
2025-05-03 00:56:21.614488 | orchestrator | Saturday 03 May 2025 00:47:36 +0000 (0:00:01.209) 0:04:10.139 **********
2025-05-03 00:56:21.614494 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:56:21.614499 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:56:21.614505 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:56:21.614511 | orchestrator |
2025-05-03 00:56:21.614517 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-05-03 00:56:21.614523 | orchestrator | Saturday 03 May 2025 00:47:37 +0000 (0:00:01.417) 0:04:11.557 **********
2025-05-03 00:56:21.614528 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:56:21.614534 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:56:21.614540 | orchestrator |
2025-05-03 00:56:21.614546 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] *******************************
2025-05-03 00:56:21.614552 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-03 00:56:21.614558 | orchestrator |
2025-05-03 00:56:21.614564 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-05-03 00:56:21.614569 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:56:21.614575 | orchestrator |
2025-05-03 00:56:21.614581 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] *******************************
2025-05-03 00:56:21.614587 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-03 00:56:21.614593 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-03 00:56:21.614599 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.614604 | orchestrator |
2025-05-03 00:56:21.614610 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] *********
2025-05-03 00:56:21.614616 | orchestrator | Saturday 03 May 2025 00:47:38 +0000 (0:00:01.246) 0:04:12.803 **********
2025-05-03 00:56:21.614622 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.614628 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.614640 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.614646 | orchestrator |
2025-05-03 00:56:21.614652 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] **********************************
2025-05-03 00:56:21.614657 | orchestrator | Saturday 03 May 2025 00:47:39 +0000 (0:00:01.082) 0:04:13.885 **********
2025-05-03 00:56:21.614663 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-03 00:56:21.614669 | orchestrator |
2025-05-03 00:56:21.614675 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ********
2025-05-03 00:56:21.614680 | orchestrator | Saturday 03 May 2025 00:47:40 +0000 (0:00:00.607) 0:04:14.493 **********
2025-05-03 00:56:21.614686 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.614692 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.614698 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.614704 | orchestrator |
2025-05-03 00:56:21.614710 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] ***********************
2025-05-03 00:56:21.614715 | orchestrator | Saturday 03 May 2025 00:47:41 +0000 (0:00:00.552) 0:04:15.045 **********
2025-05-03 00:56:21.614721 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:56:21.614727 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:56:21.614733 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:56:21.614738 | orchestrator |
2025-05-03 00:56:21.614744 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ********************
2025-05-03 00:56:21.614750 | orchestrator | Saturday 03 May 2025 00:47:42 +0000 (0:00:01.204) 0:04:16.250 **********
2025-05-03 00:56:21.614756 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-03 00:56:21.614762 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-03 00:56:21.614767 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-03 00:56:21.614773 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.614779 | orchestrator |
2025-05-03 00:56:21.614785 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] *********
2025-05-03 00:56:21.614791 | orchestrator | Saturday 03 May 2025 00:47:42 +0000 (0:00:00.646) 0:04:16.897 **********
2025-05-03 00:56:21.614797 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.614802 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.614808 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.614814 | orchestrator |
2025-05-03 00:56:21.614852 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] ****************************
2025-05-03 00:56:21.614860 | orchestrator | Saturday 03 May 2025 00:47:43 +0000 (0:00:00.473) 0:04:17.371 **********
2025-05-03 00:56:21.614866 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.614875 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.614881 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.614887 | orchestrator |
2025-05-03 00:56:21.614893 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] **********************************
2025-05-03 00:56:21.614936 | orchestrator | Saturday 03 May 2025 00:47:44 +0000 (0:00:00.695) 0:04:18.066 **********
2025-05-03 00:56:21.614945 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.614951 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.614956 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.614962 | orchestrator |
2025-05-03 00:56:21.614968 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ******
2025-05-03 00:56:21.614974 | orchestrator | Saturday 03 May 2025 00:47:44 +0000 (0:00:00.373) 0:04:18.440 **********
2025-05-03 00:56:21.614980 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.614986 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.614991 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.614997 | orchestrator |
2025-05-03 00:56:21.615003 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-05-03 00:56:21.615009 | orchestrator | Saturday 03 May 2025 00:47:44 +0000 (0:00:00.386) 0:04:18.826 **********
2025-05-03 00:56:21.615015 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:56:21.615025 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:56:21.615030 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:56:21.615036 | orchestrator |
2025-05-03 00:56:21.615042 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-05-03 00:56:21.615048 | orchestrator |
2025-05-03 00:56:21.615054 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-05-03 00:56:21.615059 | orchestrator | Saturday 03 May 2025 00:47:47 +0000 (0:00:02.545) 0:04:21.372 **********
2025-05-03 00:56:21.615065 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:56:21.615071 | orchestrator |
2025-05-03 00:56:21.615077 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-05-03 00:56:21.615083 | orchestrator | Saturday 03 May 2025 00:47:47 +0000 (0:00:00.599) 0:04:21.972 **********
2025-05-03 00:56:21.615089 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.615094 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.615100 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.615106 | orchestrator |
2025-05-03 00:56:21.615112 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-05-03 00:56:21.615118 | orchestrator | Saturday 03 May 2025 00:47:48 +0000 (0:00:00.728) 0:04:22.701 **********
2025-05-03 00:56:21.615124 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.615129 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.615135 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.615141 | orchestrator |
2025-05-03 00:56:21.615147 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-05-03 00:56:21.615153 | orchestrator | Saturday 03 May 2025 00:47:49 +0000 (0:00:00.582) 0:04:23.283 **********
2025-05-03 00:56:21.615159 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.615164 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.615170 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.615176 | orchestrator |
2025-05-03 00:56:21.615182 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-05-03 00:56:21.615188 | orchestrator | Saturday 03 May 2025 00:47:49 +0000 (0:00:00.413) 0:04:23.697 **********
2025-05-03 00:56:21.615194 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.615199 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.615205 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.615211 | orchestrator |
2025-05-03 00:56:21.615217 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-05-03 00:56:21.615223 | orchestrator | Saturday 03 May 2025 00:47:50 +0000 (0:00:00.369) 0:04:24.066 **********
2025-05-03 00:56:21.615228 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.615234 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.615240 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.615246 | orchestrator |
2025-05-03 00:56:21.615263 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-05-03 00:56:21.615270 | orchestrator | Saturday 03 May 2025 00:47:50 +0000 (0:00:00.771) 0:04:24.838 **********
2025-05-03 00:56:21.615276 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.615282 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.615287 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.615293 | orchestrator |
2025-05-03 00:56:21.615299 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-05-03 00:56:21.615305 | orchestrator | Saturday 03 May 2025 00:47:51 +0000 (0:00:00.659) 0:04:25.497 **********
2025-05-03 00:56:21.615311 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.615316 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.615322 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.615328 | orchestrator |
2025-05-03 00:56:21.615334 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-05-03 00:56:21.615340 | orchestrator | Saturday 03 May 2025 00:47:51 +0000 (0:00:00.333) 0:04:25.831 **********
2025-05-03 00:56:21.615345 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.615355 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.615381 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.615388 | orchestrator |
2025-05-03 00:56:21.615393 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-05-03 00:56:21.615413 | orchestrator | Saturday 03 May 2025 00:47:52 +0000 (0:00:00.347) 0:04:26.178 **********
2025-05-03 00:56:21.615419 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.615425 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.615430 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.615436 | orchestrator |
2025-05-03 00:56:21.615442 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-05-03 00:56:21.615448 | orchestrator | Saturday 03 May 2025 00:47:52 +0000 (0:00:00.391) 0:04:26.570 **********
2025-05-03 00:56:21.615454 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.615459 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.615485 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.615492 | orchestrator |
2025-05-03 00:56:21.615497 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-05-03 00:56:21.615507 | orchestrator | Saturday 03 May 2025 00:47:53 +0000 (0:00:00.736) 0:04:27.306 **********
2025-05-03 00:56:21.615513 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.615519 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.615524 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.615530 | orchestrator |
2025-05-03 00:56:21.615573 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-05-03 00:56:21.615582 | orchestrator | Saturday 03 May 2025 00:47:54 +0000 (0:00:00.805) 0:04:28.112 **********
2025-05-03 00:56:21.615588 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.615594 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.615600 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.615605 | orchestrator |
2025-05-03 00:56:21.615611 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-05-03 00:56:21.615617 | orchestrator | Saturday 03 May 2025 00:47:54 +0000 (0:00:00.366) 0:04:28.479 **********
2025-05-03 00:56:21.615623 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.615629 | orchestrator | ok: [testbed-node-1]
2025-05-03
00:56:21.615635 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.615644 | orchestrator | 2025-05-03 00:56:21.615650 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-03 00:56:21.615655 | orchestrator | Saturday 03 May 2025 00:47:55 +0000 (0:00:00.732) 0:04:29.212 ********** 2025-05-03 00:56:21.615661 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.615667 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.615673 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.615679 | orchestrator | 2025-05-03 00:56:21.615685 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-03 00:56:21.615691 | orchestrator | Saturday 03 May 2025 00:47:55 +0000 (0:00:00.406) 0:04:29.618 ********** 2025-05-03 00:56:21.615696 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.615702 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.615708 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.615714 | orchestrator | 2025-05-03 00:56:21.615720 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-03 00:56:21.615726 | orchestrator | Saturday 03 May 2025 00:47:55 +0000 (0:00:00.350) 0:04:29.968 ********** 2025-05-03 00:56:21.615731 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.615737 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.615743 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.615749 | orchestrator | 2025-05-03 00:56:21.615755 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-03 00:56:21.615761 | orchestrator | Saturday 03 May 2025 00:47:56 +0000 (0:00:00.416) 0:04:30.384 ********** 2025-05-03 00:56:21.615767 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.615773 | orchestrator | skipping: [testbed-node-1] 2025-05-03 
00:56:21.615783 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.615789 | orchestrator | 2025-05-03 00:56:21.615795 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-03 00:56:21.615801 | orchestrator | Saturday 03 May 2025 00:47:57 +0000 (0:00:00.651) 0:04:31.036 ********** 2025-05-03 00:56:21.615807 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.615813 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.615819 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.615824 | orchestrator | 2025-05-03 00:56:21.615830 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-03 00:56:21.615836 | orchestrator | Saturday 03 May 2025 00:47:57 +0000 (0:00:00.434) 0:04:31.471 ********** 2025-05-03 00:56:21.615842 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:21.615847 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.615853 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.615859 | orchestrator | 2025-05-03 00:56:21.615865 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-03 00:56:21.615871 | orchestrator | Saturday 03 May 2025 00:47:57 +0000 (0:00:00.409) 0:04:31.880 ********** 2025-05-03 00:56:21.615877 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:21.615882 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.615888 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.615894 | orchestrator | 2025-05-03 00:56:21.615900 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-03 00:56:21.615905 | orchestrator | Saturday 03 May 2025 00:47:58 +0000 (0:00:00.358) 0:04:32.239 ********** 2025-05-03 00:56:21.615911 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.615917 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.615923 | orchestrator | 
skipping: [testbed-node-2] 2025-05-03 00:56:21.615929 | orchestrator | 2025-05-03 00:56:21.615934 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-03 00:56:21.615940 | orchestrator | Saturday 03 May 2025 00:47:58 +0000 (0:00:00.581) 0:04:32.821 ********** 2025-05-03 00:56:21.615946 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.615952 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.615957 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.615963 | orchestrator | 2025-05-03 00:56:21.615969 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-03 00:56:21.615975 | orchestrator | Saturday 03 May 2025 00:47:59 +0000 (0:00:00.355) 0:04:33.176 ********** 2025-05-03 00:56:21.615981 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.615987 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.615992 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.615998 | orchestrator | 2025-05-03 00:56:21.616004 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-03 00:56:21.616010 | orchestrator | Saturday 03 May 2025 00:47:59 +0000 (0:00:00.292) 0:04:33.469 ********** 2025-05-03 00:56:21.616016 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.616021 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.616027 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.616033 | orchestrator | 2025-05-03 00:56:21.616039 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-03 00:56:21.616044 | orchestrator | Saturday 03 May 2025 00:47:59 +0000 (0:00:00.296) 0:04:33.765 ********** 2025-05-03 00:56:21.616050 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.616056 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.616062 | orchestrator | 
skipping: [testbed-node-2] 2025-05-03 00:56:21.616068 | orchestrator | 2025-05-03 00:56:21.616073 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-03 00:56:21.616079 | orchestrator | Saturday 03 May 2025 00:48:00 +0000 (0:00:00.452) 0:04:34.218 ********** 2025-05-03 00:56:21.616085 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.616091 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.616097 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.616106 | orchestrator | 2025-05-03 00:56:21.616144 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-03 00:56:21.616155 | orchestrator | Saturday 03 May 2025 00:48:00 +0000 (0:00:00.302) 0:04:34.521 ********** 2025-05-03 00:56:21.616162 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.616168 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.616173 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.616179 | orchestrator | 2025-05-03 00:56:21.616185 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-03 00:56:21.616191 | orchestrator | Saturday 03 May 2025 00:48:00 +0000 (0:00:00.309) 0:04:34.831 ********** 2025-05-03 00:56:21.616197 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.616202 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.616208 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.616214 | orchestrator | 2025-05-03 00:56:21.616220 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-03 00:56:21.616226 | orchestrator | Saturday 03 May 2025 00:48:01 +0000 (0:00:00.258) 0:04:35.089 ********** 2025-05-03 00:56:21.616232 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.616237 | orchestrator | skipping: 
[testbed-node-1] 2025-05-03 00:56:21.616243 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.616249 | orchestrator | 2025-05-03 00:56:21.616294 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-03 00:56:21.616300 | orchestrator | Saturday 03 May 2025 00:48:01 +0000 (0:00:00.429) 0:04:35.519 ********** 2025-05-03 00:56:21.616306 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.616312 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.616318 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.616327 | orchestrator | 2025-05-03 00:56:21.616333 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-03 00:56:21.616338 | orchestrator | Saturday 03 May 2025 00:48:01 +0000 (0:00:00.259) 0:04:35.778 ********** 2025-05-03 00:56:21.616344 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.616350 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.616356 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.616362 | orchestrator | 2025-05-03 00:56:21.616368 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-03 00:56:21.616373 | orchestrator | Saturday 03 May 2025 00:48:02 +0000 (0:00:00.297) 0:04:36.076 ********** 2025-05-03 00:56:21.616379 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.616385 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.616391 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.616396 | orchestrator | 2025-05-03 00:56:21.616402 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-03 00:56:21.616408 | orchestrator | Saturday 03 May 2025 00:48:02 +0000 (0:00:00.284) 0:04:36.360 ********** 2025-05-03 00:56:21.616414 | orchestrator | skipping: [testbed-node-0] => 
(item=)  2025-05-03 00:56:21.616420 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-03 00:56:21.616425 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.616431 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-03 00:56:21.616437 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-03 00:56:21.616443 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.616448 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-03 00:56:21.616454 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-03 00:56:21.616460 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.616466 | orchestrator | 2025-05-03 00:56:21.616471 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-03 00:56:21.616477 | orchestrator | Saturday 03 May 2025 00:48:02 +0000 (0:00:00.616) 0:04:36.976 ********** 2025-05-03 00:56:21.616483 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-03 00:56:21.616489 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-03 00:56:21.616498 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.616504 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-03 00:56:21.616509 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-03 00:56:21.616515 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.616521 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-03 00:56:21.616527 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-03 00:56:21.616553 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.616560 | orchestrator | 2025-05-03 00:56:21.616578 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-03 00:56:21.616584 | orchestrator | Saturday 03 May 2025 00:48:03 +0000 (0:00:00.338) 
0:04:37.315 ********** 2025-05-03 00:56:21.616590 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.616596 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.616602 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.616608 | orchestrator | 2025-05-03 00:56:21.616614 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-03 00:56:21.616619 | orchestrator | Saturday 03 May 2025 00:48:03 +0000 (0:00:00.328) 0:04:37.643 ********** 2025-05-03 00:56:21.616625 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.616631 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.616637 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.616643 | orchestrator | 2025-05-03 00:56:21.616649 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-03 00:56:21.616654 | orchestrator | Saturday 03 May 2025 00:48:03 +0000 (0:00:00.327) 0:04:37.971 ********** 2025-05-03 00:56:21.616660 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.616666 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.616691 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.616697 | orchestrator | 2025-05-03 00:56:21.616703 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-03 00:56:21.616709 | orchestrator | Saturday 03 May 2025 00:48:04 +0000 (0:00:00.488) 0:04:38.459 ********** 2025-05-03 00:56:21.616715 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.616765 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.616774 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.616780 | orchestrator | 2025-05-03 00:56:21.616786 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-03 00:56:21.616792 | orchestrator 
| Saturday 03 May 2025 00:48:04 +0000 (0:00:00.293) 0:04:38.753 ********** 2025-05-03 00:56:21.616797 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.616803 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.616809 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.616815 | orchestrator | 2025-05-03 00:56:21.616821 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-03 00:56:21.616826 | orchestrator | Saturday 03 May 2025 00:48:05 +0000 (0:00:00.357) 0:04:39.110 ********** 2025-05-03 00:56:21.616832 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.616838 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.616844 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.616849 | orchestrator | 2025-05-03 00:56:21.616855 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-03 00:56:21.616861 | orchestrator | Saturday 03 May 2025 00:48:05 +0000 (0:00:00.354) 0:04:39.464 ********** 2025-05-03 00:56:21.616867 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-03 00:56:21.616873 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-03 00:56:21.616879 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-03 00:56:21.616885 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.616891 | orchestrator | 2025-05-03 00:56:21.616896 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-03 00:56:21.616907 | orchestrator | Saturday 03 May 2025 00:48:06 +0000 (0:00:00.828) 0:04:40.292 ********** 2025-05-03 00:56:21.616913 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-03 00:56:21.616918 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-03 00:56:21.616924 | orchestrator | skipping: [testbed-node-0] => 
(item=testbed-node-5)  2025-05-03 00:56:21.616930 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.616936 | orchestrator | 2025-05-03 00:56:21.616941 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-03 00:56:21.616947 | orchestrator | Saturday 03 May 2025 00:48:06 +0000 (0:00:00.379) 0:04:40.672 ********** 2025-05-03 00:56:21.616953 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-03 00:56:21.616959 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-03 00:56:21.616965 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-03 00:56:21.616971 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.616976 | orchestrator | 2025-05-03 00:56:21.616982 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-03 00:56:21.616988 | orchestrator | Saturday 03 May 2025 00:48:07 +0000 (0:00:00.340) 0:04:41.012 ********** 2025-05-03 00:56:21.616994 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.616999 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.617005 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.617011 | orchestrator | 2025-05-03 00:56:21.617017 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-03 00:56:21.617028 | orchestrator | Saturday 03 May 2025 00:48:07 +0000 (0:00:00.267) 0:04:41.280 ********** 2025-05-03 00:56:21.617034 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-03 00:56:21.617040 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.617045 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-03 00:56:21.617051 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.617057 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-03 00:56:21.617063 | orchestrator | skipping: [testbed-node-2] 
2025-05-03 00:56:21.617069 | orchestrator | 2025-05-03 00:56:21.617074 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-03 00:56:21.617080 | orchestrator | Saturday 03 May 2025 00:48:07 +0000 (0:00:00.455) 0:04:41.736 ********** 2025-05-03 00:56:21.617086 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.617092 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.617097 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.617103 | orchestrator | 2025-05-03 00:56:21.617109 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-03 00:56:21.617115 | orchestrator | Saturday 03 May 2025 00:48:08 +0000 (0:00:00.421) 0:04:42.158 ********** 2025-05-03 00:56:21.617120 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.617126 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.617132 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.617138 | orchestrator | 2025-05-03 00:56:21.617144 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-03 00:56:21.617149 | orchestrator | Saturday 03 May 2025 00:48:08 +0000 (0:00:00.288) 0:04:42.447 ********** 2025-05-03 00:56:21.617155 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-03 00:56:21.617161 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.617167 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-03 00:56:21.617172 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.617178 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-03 00:56:21.617184 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.617190 | orchestrator | 2025-05-03 00:56:21.617196 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-03 00:56:21.617201 | orchestrator | Saturday 03 May 2025 00:48:08 +0000 
(0:00:00.365) 0:04:42.812 ********** 2025-05-03 00:56:21.617212 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.617218 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.617224 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.617232 | orchestrator | 2025-05-03 00:56:21.617238 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-03 00:56:21.617244 | orchestrator | Saturday 03 May 2025 00:48:09 +0000 (0:00:00.298) 0:04:43.110 ********** 2025-05-03 00:56:21.617264 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-03 00:56:21.617271 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-03 00:56:21.617311 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-03 00:56:21.617320 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.617326 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-03 00:56:21.617332 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-03 00:56:21.617338 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-03 00:56:21.617343 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.617349 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-03 00:56:21.617358 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-03 00:56:21.617364 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-03 00:56:21.617370 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.617376 | orchestrator | 2025-05-03 00:56:21.617382 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-03 00:56:21.617388 | orchestrator | Saturday 03 May 2025 00:48:09 +0000 (0:00:00.774) 0:04:43.885 ********** 2025-05-03 00:56:21.617394 | orchestrator | skipping: [testbed-node-0] 2025-05-03 
00:56:21.617399 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.617405 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.617411 | orchestrator | 2025-05-03 00:56:21.617417 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-03 00:56:21.617423 | orchestrator | Saturday 03 May 2025 00:48:10 +0000 (0:00:00.505) 0:04:44.390 ********** 2025-05-03 00:56:21.617429 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.617435 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.617440 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.617446 | orchestrator | 2025-05-03 00:56:21.617452 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-03 00:56:21.617458 | orchestrator | Saturday 03 May 2025 00:48:11 +0000 (0:00:00.691) 0:04:45.082 ********** 2025-05-03 00:56:21.617464 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.617469 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.617475 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.617481 | orchestrator | 2025-05-03 00:56:21.617487 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-03 00:56:21.617493 | orchestrator | Saturday 03 May 2025 00:48:11 +0000 (0:00:00.527) 0:04:45.610 ********** 2025-05-03 00:56:21.617499 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.617504 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.617510 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.617516 | orchestrator | 2025-05-03 00:56:21.617522 | orchestrator | TASK [ceph-mon : set_fact container_exec_cmd] ********************************** 2025-05-03 00:56:21.617528 | orchestrator | Saturday 03 May 2025 00:48:12 +0000 (0:00:00.675) 0:04:46.285 ********** 2025-05-03 00:56:21.617534 | orchestrator | ok: [testbed-node-0] 2025-05-03 
00:56:21.617539 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.617545 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.617551 | orchestrator | 2025-05-03 00:56:21.617557 | orchestrator | TASK [ceph-mon : include deploy_monitors.yml] ********************************** 2025-05-03 00:56:21.617563 | orchestrator | Saturday 03 May 2025 00:48:12 +0000 (0:00:00.298) 0:04:46.584 ********** 2025-05-03 00:56:21.617568 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:56:21.617578 | orchestrator | 2025-05-03 00:56:21.617584 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] ************** 2025-05-03 00:56:21.617590 | orchestrator | Saturday 03 May 2025 00:48:13 +0000 (0:00:00.713) 0:04:47.298 ********** 2025-05-03 00:56:21.617596 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.617601 | orchestrator | 2025-05-03 00:56:21.617607 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] ***************************** 2025-05-03 00:56:21.617613 | orchestrator | Saturday 03 May 2025 00:48:13 +0000 (0:00:00.137) 0:04:47.435 ********** 2025-05-03 00:56:21.617619 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-03 00:56:21.617625 | orchestrator | 2025-05-03 00:56:21.617630 | orchestrator | TASK [ceph-mon : set_fact _initial_mon_key_success] **************************** 2025-05-03 00:56:21.617636 | orchestrator | Saturday 03 May 2025 00:48:14 +0000 (0:00:00.808) 0:04:48.243 ********** 2025-05-03 00:56:21.617642 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:21.617648 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.617654 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.617659 | orchestrator | 2025-05-03 00:56:21.617665 | orchestrator | TASK [ceph-mon : get initial keyring when it already exists] ******************* 2025-05-03 00:56:21.617671 | orchestrator | Saturday 03 May 
2025 00:48:14 +0000 (0:00:00.401) 0:04:48.644 ********** 2025-05-03 00:56:21.617676 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:21.617707 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.617715 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.617731 | orchestrator | 2025-05-03 00:56:21.617738 | orchestrator | TASK [ceph-mon : create monitor initial keyring] ******************************* 2025-05-03 00:56:21.617746 | orchestrator | Saturday 03 May 2025 00:48:15 +0000 (0:00:00.423) 0:04:49.068 ********** 2025-05-03 00:56:21.617752 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:21.617758 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:56:21.617764 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:56:21.617770 | orchestrator | 2025-05-03 00:56:21.617776 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] *********** 2025-05-03 00:56:21.617781 | orchestrator | Saturday 03 May 2025 00:48:16 +0000 (0:00:01.285) 0:04:50.353 ********** 2025-05-03 00:56:21.617787 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:21.617800 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:56:21.617806 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:56:21.617812 | orchestrator | 2025-05-03 00:56:21.617818 | orchestrator | TASK [ceph-mon : create monitor directory] ************************************* 2025-05-03 00:56:21.617824 | orchestrator | Saturday 03 May 2025 00:48:17 +0000 (0:00:00.772) 0:04:51.126 ********** 2025-05-03 00:56:21.617830 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:21.617835 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:56:21.617841 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:56:21.617847 | orchestrator | 2025-05-03 00:56:21.617853 | orchestrator | TASK [ceph-mon : recursively fix ownership of monitor directory] *************** 2025-05-03 00:56:21.617858 | orchestrator | Saturday 03 May 2025 00:48:17 +0000 
(0:00:00.746) 0:04:51.872 ********** 2025-05-03 00:56:21.617882 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:21.617889 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.617895 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.617901 | orchestrator | 2025-05-03 00:56:21.617907 | orchestrator | TASK [ceph-mon : create custom admin keyring] ********************************** 2025-05-03 00:56:21.617912 | orchestrator | Saturday 03 May 2025 00:48:18 +0000 (0:00:00.689) 0:04:52.562 ********** 2025-05-03 00:56:21.617918 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.617924 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.617930 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.617936 | orchestrator | 2025-05-03 00:56:21.617941 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] ********************* 2025-05-03 00:56:21.617947 | orchestrator | Saturday 03 May 2025 00:48:19 +0000 (0:00:00.577) 0:04:53.140 ********** 2025-05-03 00:56:21.617954 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:21.617965 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.617972 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.617979 | orchestrator | 2025-05-03 00:56:21.617985 | orchestrator | TASK [ceph-mon : import admin keyring into mon keyring] ************************ 2025-05-03 00:56:21.617992 | orchestrator | Saturday 03 May 2025 00:48:19 +0000 (0:00:00.337) 0:04:53.478 ********** 2025-05-03 00:56:21.617998 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.618005 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.618011 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.618036 | orchestrator | 2025-05-03 00:56:21.618043 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] ************************** 2025-05-03 00:56:21.618049 | orchestrator | Saturday 03 May 2025 00:48:19 +0000 (0:00:00.357) 0:04:53.835 ********** 
2025-05-03 00:56:21.618056 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.618063 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.618069 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.618076 | orchestrator |
2025-05-03 00:56:21.618082 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] *******************************
2025-05-03 00:56:21.618089 | orchestrator | Saturday 03 May 2025 00:48:20 +0000 (0:00:00.370) 0:04:54.205 **********
2025-05-03 00:56:21.618095 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:56:21.618102 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:56:21.618108 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:56:21.618129 | orchestrator |
2025-05-03 00:56:21.618136 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] ****************************
2025-05-03 00:56:21.618142 | orchestrator | Saturday 03 May 2025 00:48:21 +0000 (0:00:01.716) 0:04:55.922 **********
2025-05-03 00:56:21.618149 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.618156 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.618162 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.618169 | orchestrator |
2025-05-03 00:56:21.618175 | orchestrator | TASK [ceph-mon : include start_monitor.yml] ************************************
2025-05-03 00:56:21.618182 | orchestrator | Saturday 03 May 2025 00:48:22 +0000 (0:00:00.403) 0:04:56.325 **********
2025-05-03 00:56:21.618188 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:56:21.618195 | orchestrator |
2025-05-03 00:56:21.618202 | orchestrator | TASK [ceph-mon : ensure systemd service override directory exists] *************
2025-05-03 00:56:21.618208 | orchestrator | Saturday 03 May 2025 00:48:23 +0000 (0:00:00.850) 0:04:57.176 **********
2025-05-03 00:56:21.618215 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.618222 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.618228 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.618235 | orchestrator |
2025-05-03 00:56:21.618242 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] ***********************
2025-05-03 00:56:21.618248 | orchestrator | Saturday 03 May 2025 00:48:23 +0000 (0:00:00.426) 0:04:57.603 **********
2025-05-03 00:56:21.618266 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.618273 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.618280 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.618286 | orchestrator |
2025-05-03 00:56:21.618293 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************
2025-05-03 00:56:21.618300 | orchestrator | Saturday 03 May 2025 00:48:23 +0000 (0:00:00.328) 0:04:57.932 **********
2025-05-03 00:56:21.618306 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:56:21.618312 | orchestrator |
2025-05-03 00:56:21.618318 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] *****************
2025-05-03 00:56:21.618324 | orchestrator | Saturday 03 May 2025 00:48:24 +0000 (0:00:00.873) 0:04:58.805 **********
2025-05-03 00:56:21.618330 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:56:21.618335 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:56:21.618341 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:56:21.618347 | orchestrator |
2025-05-03 00:56:21.618353 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************
2025-05-03 00:56:21.618362 | orchestrator | Saturday 03 May 2025 00:48:26 +0000 (0:00:01.382) 0:05:00.187 **********
2025-05-03 00:56:21.618368 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:56:21.618374 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:56:21.618380 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:56:21.618385 | orchestrator |
2025-05-03 00:56:21.618391 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] ***************************************
2025-05-03 00:56:21.618399 | orchestrator | Saturday 03 May 2025 00:48:27 +0000 (0:00:01.230) 0:05:01.418 **********
2025-05-03 00:56:21.618405 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:56:21.618411 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:56:21.618417 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:56:21.618423 | orchestrator |
2025-05-03 00:56:21.618428 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************
2025-05-03 00:56:21.618434 | orchestrator | Saturday 03 May 2025 00:48:29 +0000 (0:00:01.912) 0:05:03.330 **********
2025-05-03 00:56:21.618440 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:56:21.618446 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:56:21.618452 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:56:21.618458 | orchestrator |
2025-05-03 00:56:21.618463 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] **********************************
2025-05-03 00:56:21.618469 | orchestrator | Saturday 03 May 2025 00:48:31 +0000 (0:00:01.954) 0:05:05.285 **********
2025-05-03 00:56:21.618491 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:56:21.618498 | orchestrator |
2025-05-03 00:56:21.618504 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] *************
2025-05-03 00:56:21.618510 | orchestrator | Saturday 03 May 2025 00:48:31 +0000 (0:00:00.631) 0:05:05.916 **********
2025-05-03 00:56:21.618516 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left).
2025-05-03 00:56:21.618522 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.618527 | orchestrator |
2025-05-03 00:56:21.618533 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] **************************************
2025-05-03 00:56:21.618539 | orchestrator | Saturday 03 May 2025 00:48:53 +0000 (0:00:21.459) 0:05:27.376 **********
2025-05-03 00:56:21.618545 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.618551 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.618557 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.618562 | orchestrator |
2025-05-03 00:56:21.618568 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] ***********************************
2025-05-03 00:56:21.618574 | orchestrator | Saturday 03 May 2025 00:49:01 +0000 (0:00:07.768) 0:05:35.144 **********
2025-05-03 00:56:21.618580 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.618586 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.618591 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.618597 | orchestrator |
2025-05-03 00:56:21.618603 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-05-03 00:56:21.618609 | orchestrator | Saturday 03 May 2025 00:49:02 +0000 (0:00:01.148) 0:05:36.293 **********
2025-05-03 00:56:21.618615 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:56:21.618620 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:56:21.618626 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:56:21.618632 | orchestrator |
2025-05-03 00:56:21.618638 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] **********************************
2025-05-03 00:56:21.618644 | orchestrator | Saturday 03 May 2025 00:49:03 +0000 (0:00:00.708) 0:05:37.001 **********
2025-05-03 00:56:21.618649 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:56:21.618655 | orchestrator |
2025-05-03 00:56:21.618661 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ********
2025-05-03 00:56:21.618667 | orchestrator | Saturday 03 May 2025 00:49:03 +0000 (0:00:00.741) 0:05:37.743 **********
2025-05-03 00:56:21.618676 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.618682 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.618688 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.618694 | orchestrator |
2025-05-03 00:56:21.618700 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] ***********************
2025-05-03 00:56:21.618706 | orchestrator | Saturday 03 May 2025 00:49:04 +0000 (0:00:00.368) 0:05:38.112 **********
2025-05-03 00:56:21.618711 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:56:21.618717 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:56:21.618723 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:56:21.618729 | orchestrator |
2025-05-03 00:56:21.618735 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ********************
2025-05-03 00:56:21.618740 | orchestrator | Saturday 03 May 2025 00:49:05 +0000 (0:00:01.228) 0:05:39.340 **********
2025-05-03 00:56:21.618746 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-03 00:56:21.618752 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-03 00:56:21.618758 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-03 00:56:21.618764 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.618769 | orchestrator |
2025-05-03 00:56:21.618775 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] *********
2025-05-03 00:56:21.618781 | orchestrator | Saturday 03 May 2025 00:49:06 +0000 (0:00:01.266) 0:05:40.606 **********
2025-05-03 00:56:21.618787 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.618793 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.618799 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.618804 | orchestrator |
2025-05-03 00:56:21.618810 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-05-03 00:56:21.618816 | orchestrator | Saturday 03 May 2025 00:49:07 +0000 (0:00:00.458) 0:05:41.065 **********
2025-05-03 00:56:21.618822 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:56:21.618828 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:56:21.618833 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:56:21.618839 | orchestrator |
2025-05-03 00:56:21.618845 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2025-05-03 00:56:21.618851 | orchestrator |
2025-05-03 00:56:21.618856 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-05-03 00:56:21.618862 | orchestrator | Saturday 03 May 2025 00:49:09 +0000 (0:00:02.247) 0:05:43.313 **********
2025-05-03 00:56:21.618868 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:56:21.618874 | orchestrator |
2025-05-03 00:56:21.618880 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-05-03 00:56:21.618886 | orchestrator | Saturday 03 May 2025 00:49:10 +0000 (0:00:00.823) 0:05:44.136 **********
2025-05-03 00:56:21.618891 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.618897 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.618903 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.618909 | orchestrator |
2025-05-03 00:56:21.618915 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-05-03 00:56:21.618920 | orchestrator | Saturday 03 May 2025 00:49:10 +0000 (0:00:00.749) 0:05:44.886 **********
2025-05-03 00:56:21.618926 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.618932 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.618938 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.618946 | orchestrator |
2025-05-03 00:56:21.618955 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-05-03 00:56:21.618961 | orchestrator | Saturday 03 May 2025 00:49:11 +0000 (0:00:00.312) 0:05:45.198 **********
2025-05-03 00:56:21.618967 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.618987 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.618994 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.619000 | orchestrator |
2025-05-03 00:56:21.619006 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-05-03 00:56:21.619015 | orchestrator | Saturday 03 May 2025 00:49:11 +0000 (0:00:00.556) 0:05:45.755 **********
2025-05-03 00:56:21.619021 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.619027 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.619033 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.619038 | orchestrator |
2025-05-03 00:56:21.619044 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-05-03 00:56:21.619050 | orchestrator | Saturday 03 May 2025 00:49:12 +0000 (0:00:00.345) 0:05:46.101 **********
2025-05-03 00:56:21.619056 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.619062 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.619068 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.619073 | orchestrator |
2025-05-03 00:56:21.619079 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-05-03 00:56:21.619085 | orchestrator | Saturday 03 May 2025 00:49:12 +0000 (0:00:00.735) 0:05:46.836 **********
2025-05-03 00:56:21.619091 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.619097 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.619102 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.619108 | orchestrator |
2025-05-03 00:56:21.619114 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-05-03 00:56:21.619120 | orchestrator | Saturday 03 May 2025 00:49:13 +0000 (0:00:00.329) 0:05:47.166 **********
2025-05-03 00:56:21.619126 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.619131 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.619137 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.619143 | orchestrator |
2025-05-03 00:56:21.619149 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-05-03 00:56:21.619155 | orchestrator | Saturday 03 May 2025 00:49:13 +0000 (0:00:00.588) 0:05:47.754 **********
2025-05-03 00:56:21.619161 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.619166 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.619172 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.619178 | orchestrator |
2025-05-03 00:56:21.619184 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-05-03 00:56:21.619190 | orchestrator | Saturday 03 May 2025 00:49:14 +0000 (0:00:00.344) 0:05:48.099 **********
2025-05-03 00:56:21.619195 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.619201 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.619210 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.619216 | orchestrator |
2025-05-03 00:56:21.619221 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-05-03 00:56:21.619227 | orchestrator | Saturday 03 May 2025 00:49:14 +0000 (0:00:00.382) 0:05:48.482 **********
2025-05-03 00:56:21.619233 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.619239 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.619245 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.619284 | orchestrator |
2025-05-03 00:56:21.619291 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-05-03 00:56:21.619297 | orchestrator | Saturday 03 May 2025 00:49:14 +0000 (0:00:00.412) 0:05:48.894 **********
2025-05-03 00:56:21.619302 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.619308 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.619314 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.619320 | orchestrator |
2025-05-03 00:56:21.619326 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-05-03 00:56:21.619332 | orchestrator | Saturday 03 May 2025 00:49:16 +0000 (0:00:01.102) 0:05:49.996 **********
2025-05-03 00:56:21.619337 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.619343 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.619349 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.619355 | orchestrator |
2025-05-03 00:56:21.619360 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-05-03 00:56:21.619372 | orchestrator | Saturday 03 May 2025 00:49:16 +0000 (0:00:00.348) 0:05:50.345 **********
2025-05-03 00:56:21.619378 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.619384 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.619390 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.619396 | orchestrator |
2025-05-03 00:56:21.619402 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-05-03 00:56:21.619408 | orchestrator | Saturday 03 May 2025 00:49:16 +0000 (0:00:00.349) 0:05:50.695 **********
2025-05-03 00:56:21.619414 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.619419 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.619425 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.619431 | orchestrator |
2025-05-03 00:56:21.619437 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-05-03 00:56:21.619442 | orchestrator | Saturday 03 May 2025 00:49:17 +0000 (0:00:00.337) 0:05:51.032 **********
2025-05-03 00:56:21.619448 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.619454 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.619460 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.619466 | orchestrator |
2025-05-03 00:56:21.619471 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-05-03 00:56:21.619477 | orchestrator | Saturday 03 May 2025 00:49:17 +0000 (0:00:00.747) 0:05:51.779 **********
2025-05-03 00:56:21.619483 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.619489 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.619495 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.619500 | orchestrator |
2025-05-03 00:56:21.619506 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-05-03 00:56:21.619512 | orchestrator | Saturday 03 May 2025 00:49:18 +0000 (0:00:00.337) 0:05:52.116 **********
2025-05-03 00:56:21.619518 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.619524 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.619529 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.619535 | orchestrator |
2025-05-03 00:56:21.619541 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-05-03 00:56:21.619550 | orchestrator | Saturday 03 May 2025 00:49:18 +0000 (0:00:00.330) 0:05:52.447 **********
2025-05-03 00:56:21.619571 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.619577 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.619583 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.619589 | orchestrator |
2025-05-03 00:56:21.619595 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-05-03 00:56:21.619601 | orchestrator | Saturday 03 May 2025 00:49:19 +0000 (0:00:00.636) 0:05:53.083 **********
2025-05-03 00:56:21.619607 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.619613 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.619619 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.619627 | orchestrator |
2025-05-03 00:56:21.619633 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-05-03 00:56:21.619639 | orchestrator | Saturday 03 May 2025 00:49:19 +0000 (0:00:00.403) 0:05:53.487 **********
2025-05-03 00:56:21.619645 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.619651 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.619657 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.619662 | orchestrator |
2025-05-03 00:56:21.619668 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-05-03 00:56:21.619674 | orchestrator | Saturday 03 May 2025 00:49:19 +0000 (0:00:00.433) 0:05:53.920 **********
2025-05-03 00:56:21.619680 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.619686 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.619692 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.619697 | orchestrator |
2025-05-03 00:56:21.619703 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-05-03 00:56:21.619709 | orchestrator | Saturday 03 May 2025 00:49:20 +0000 (0:00:00.397) 0:05:54.318 **********
2025-05-03 00:56:21.619719 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.619725 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.619730 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.619736 | orchestrator |
2025-05-03 00:56:21.619742 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-05-03 00:56:21.619748 | orchestrator | Saturday 03 May 2025 00:49:20 +0000 (0:00:00.436) 0:05:54.754 **********
2025-05-03 00:56:21.619754 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.619760 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.619765 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.619771 | orchestrator |
2025-05-03 00:56:21.619777 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-05-03 00:56:21.619783 | orchestrator | Saturday 03 May 2025 00:49:21 +0000 (0:00:00.308) 0:05:55.063 **********
2025-05-03 00:56:21.619789 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.619794 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.619800 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.619806 | orchestrator |
2025-05-03 00:56:21.619812 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-05-03 00:56:21.619817 | orchestrator | Saturday 03 May 2025 00:49:21 +0000 (0:00:00.294) 0:05:55.357 **********
2025-05-03 00:56:21.619823 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.619829 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.619835 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.619840 | orchestrator |
2025-05-03 00:56:21.619846 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-05-03 00:56:21.619852 | orchestrator | Saturday 03 May 2025 00:49:21 +0000 (0:00:00.341) 0:05:55.699 **********
2025-05-03 00:56:21.619858 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.619863 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.619869 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.619875 | orchestrator |
2025-05-03 00:56:21.619881 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-05-03 00:56:21.619887 | orchestrator | Saturday 03 May 2025 00:49:22 +0000 (0:00:00.482) 0:05:56.181 **********
2025-05-03 00:56:21.619893 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.619898 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.619904 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.619910 | orchestrator |
2025-05-03 00:56:21.619916 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-05-03 00:56:21.619922 | orchestrator | Saturday 03 May 2025 00:49:22 +0000 (0:00:00.313) 0:05:56.494 **********
2025-05-03 00:56:21.619928 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.619933 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.619939 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.619945 | orchestrator |
2025-05-03 00:56:21.619951 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-05-03 00:56:21.619957 | orchestrator | Saturday 03 May 2025 00:49:22 +0000 (0:00:00.313) 0:05:56.807 **********
2025-05-03 00:56:21.619963 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.619968 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.619974 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.619980 | orchestrator |
2025-05-03 00:56:21.619986 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-05-03 00:56:21.619992 | orchestrator | Saturday 03 May 2025 00:49:23 +0000 (0:00:00.288) 0:05:57.096 **********
2025-05-03 00:56:21.619998 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.620003 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.620009 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.620015 | orchestrator |
2025-05-03 00:56:21.620021 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-05-03 00:56:21.620027 | orchestrator | Saturday 03 May 2025 00:49:23 +0000 (0:00:00.515) 0:05:57.611 **********
2025-05-03 00:56:21.620036 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.620042 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.620048 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.620054 | orchestrator |
2025-05-03 00:56:21.620059 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-05-03 00:56:21.620065 | orchestrator | Saturday 03 May 2025 00:49:23 +0000 (0:00:00.335) 0:05:57.947 **********
2025-05-03 00:56:21.620071 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.620077 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.620083 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.620089 | orchestrator |
2025-05-03 00:56:21.620107 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-05-03 00:56:21.620117 | orchestrator | Saturday 03 May 2025 00:49:24 +0000 (0:00:00.288) 0:05:58.235 **********
2025-05-03 00:56:21.620123 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-05-03 00:56:21.620129 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-05-03 00:56:21.620135 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.620141 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-05-03 00:56:21.620147 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-05-03 00:56:21.620152 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.620158 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-05-03 00:56:21.620164 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-05-03 00:56:21.620170 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.620176 | orchestrator |
2025-05-03 00:56:21.620182 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-05-03 00:56:21.620187 | orchestrator | Saturday 03 May 2025 00:49:24 +0000 (0:00:00.333) 0:05:58.569 **********
2025-05-03 00:56:21.620193 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)
2025-05-03 00:56:21.620199 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)
2025-05-03 00:56:21.620205 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.620211 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)
2025-05-03 00:56:21.620217 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)
2025-05-03 00:56:21.620223 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.620229 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)
2025-05-03 00:56:21.620235 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)
2025-05-03 00:56:21.620241 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.620247 | orchestrator |
2025-05-03 00:56:21.620263 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-05-03 00:56:21.620269 | orchestrator | Saturday 03 May 2025 00:49:25 +0000 (0:00:00.599) 0:05:59.168 **********
2025-05-03 00:56:21.620275 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.620281 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.620287 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.620293 | orchestrator |
2025-05-03 00:56:21.620299 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-05-03 00:56:21.620307 | orchestrator | Saturday 03 May 2025 00:49:25 +0000 (0:00:00.427) 0:05:59.596 **********
2025-05-03 00:56:21.620313 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.620319 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.620325 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.620333 | orchestrator |
2025-05-03 00:56:21.620339 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-03 00:56:21.620345 | orchestrator | Saturday 03 May 2025 00:49:26 +0000 (0:00:00.451) 0:06:00.048 **********
2025-05-03 00:56:21.620351 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.620357 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.620362 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.620372 | orchestrator |
2025-05-03 00:56:21.620378 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-03 00:56:21.620384 | orchestrator | Saturday 03 May 2025 00:49:26 +0000 (0:00:00.365) 0:06:00.414 **********
2025-05-03 00:56:21.620389 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.620395 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.620401 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.620407 | orchestrator |
2025-05-03 00:56:21.620413 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-03 00:56:21.620419 | orchestrator | Saturday 03 May 2025 00:49:27 +0000 (0:00:00.662) 0:06:01.076 **********
2025-05-03 00:56:21.620424 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.620430 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.620436 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.620442 | orchestrator |
2025-05-03 00:56:21.620448 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-03 00:56:21.620453 | orchestrator | Saturday 03 May 2025 00:49:27 +0000 (0:00:00.364) 0:06:01.441 **********
2025-05-03 00:56:21.620459 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.620465 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.620471 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.620477 | orchestrator |
2025-05-03 00:56:21.620482 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-03 00:56:21.620488 | orchestrator | Saturday 03 May 2025 00:49:27 +0000 (0:00:00.380) 0:06:01.821 **********
2025-05-03 00:56:21.620494 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-03 00:56:21.620500 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-03 00:56:21.620506 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-03 00:56:21.620512 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.620517 | orchestrator |
2025-05-03 00:56:21.620523 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-03 00:56:21.620529 | orchestrator | Saturday 03 May 2025 00:49:28 +0000 (0:00:00.428) 0:06:02.250 **********
2025-05-03 00:56:21.620535 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-03 00:56:21.620541 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-03 00:56:21.620546 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-03 00:56:21.620552 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.620558 | orchestrator |
2025-05-03 00:56:21.620564 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-03 00:56:21.620570 | orchestrator | Saturday 03 May 2025 00:49:28 +0000 (0:00:00.418) 0:06:02.668 **********
2025-05-03 00:56:21.620576 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-03 00:56:21.620581 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-03 00:56:21.620587 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-03 00:56:21.620593 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.620599 | orchestrator |
2025-05-03 00:56:21.620619 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-03 00:56:21.620625 | orchestrator | Saturday 03 May 2025 00:49:29 +0000 (0:00:00.661) 0:06:03.330 **********
2025-05-03 00:56:21.620631 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.620637 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.620643 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.620649 | orchestrator |
2025-05-03 00:56:21.620655 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-03 00:56:21.620661 | orchestrator | Saturday 03 May 2025 00:49:30 +0000 (0:00:00.686) 0:06:04.016 **********
2025-05-03 00:56:21.620669 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-03 00:56:21.620675 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.620681 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-03 00:56:21.620687 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.620697 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-03 00:56:21.620703 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.620709 | orchestrator |
2025-05-03 00:56:21.620715 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-03 00:56:21.620721 | orchestrator | Saturday 03 May 2025 00:49:30 +0000 (0:00:00.500) 0:06:04.516 **********
2025-05-03 00:56:21.620726 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.620732 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.620738 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.620744 | orchestrator |
2025-05-03 00:56:21.620750 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-03 00:56:21.620756 | orchestrator | Saturday 03 May 2025 00:49:30 +0000 (0:00:00.340) 0:06:04.867 **********
2025-05-03 00:56:21.620761 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.620767 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.620773 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.620779 | orchestrator |
2025-05-03 00:56:21.620785 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-03 00:56:21.620791 | orchestrator | Saturday 03 May 2025 00:49:31 +0000 (0:00:00.804) 0:06:05.207 **********
2025-05-03 00:56:21.620796 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-03 00:56:21.620802 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.620808 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-03 00:56:21.620814 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.620820 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-03 00:56:21.620826 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.620831 | orchestrator |
2025-05-03 00:56:21.620837 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-03 00:56:21.620843 | orchestrator | Saturday 03 May 2025 00:49:32 +0000 (0:00:00.804) 0:06:06.012 **********
2025-05-03 00:56:21.620849 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:21.620855 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:21.620861 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:21.620867 | orchestrator |
2025-05-03 00:56:21.620872 | orchestrator | TASK [ceph-facts : set_fact
rgw_instances_all] ********************************* 2025-05-03 00:56:21.620878 | orchestrator | Saturday 03 May 2025 00:49:32 +0000 (0:00:00.366) 0:06:06.378 ********** 2025-05-03 00:56:21.620884 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-03 00:56:21.620890 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-03 00:56:21.620896 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-03 00:56:21.620901 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.620907 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-03 00:56:21.620913 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-03 00:56:21.620919 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-03 00:56:21.620925 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.620930 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-03 00:56:21.620936 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-03 00:56:21.620942 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-03 00:56:21.620948 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.620954 | orchestrator | 2025-05-03 00:56:21.620959 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-03 00:56:21.620965 | orchestrator | Saturday 03 May 2025 00:49:33 +0000 (0:00:00.905) 0:06:07.284 ********** 2025-05-03 00:56:21.620971 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.620977 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.620983 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.620988 | orchestrator | 2025-05-03 00:56:21.620994 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-03 00:56:21.621003 | orchestrator | Saturday 03 May 2025 00:49:33 
+0000 (0:00:00.623) 0:06:07.907 ********** 2025-05-03 00:56:21.621009 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.621015 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.621021 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.621027 | orchestrator | 2025-05-03 00:56:21.621035 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-03 00:56:21.621041 | orchestrator | Saturday 03 May 2025 00:49:34 +0000 (0:00:00.838) 0:06:08.745 ********** 2025-05-03 00:56:21.621047 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.621052 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.621058 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.621064 | orchestrator | 2025-05-03 00:56:21.621070 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-03 00:56:21.621076 | orchestrator | Saturday 03 May 2025 00:49:35 +0000 (0:00:00.609) 0:06:09.355 ********** 2025-05-03 00:56:21.621081 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.621087 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.621093 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.621099 | orchestrator | 2025-05-03 00:56:21.621105 | orchestrator | TASK [ceph-mgr : set_fact container_exec_cmd] ********************************** 2025-05-03 00:56:21.621111 | orchestrator | Saturday 03 May 2025 00:49:36 +0000 (0:00:00.867) 0:06:10.222 ********** 2025-05-03 00:56:21.621129 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-03 00:56:21.621136 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-03 00:56:21.621142 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-03 00:56:21.621148 | orchestrator | 2025-05-03 00:56:21.621154 | orchestrator | TASK [ceph-mgr : include 
common.yml] ******************************************* 2025-05-03 00:56:21.621160 | orchestrator | Saturday 03 May 2025 00:49:36 +0000 (0:00:00.573) 0:06:10.796 ********** 2025-05-03 00:56:21.621165 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:56:21.621171 | orchestrator | 2025-05-03 00:56:21.621177 | orchestrator | TASK [ceph-mgr : create mgr directory] ***************************************** 2025-05-03 00:56:21.621183 | orchestrator | Saturday 03 May 2025 00:49:37 +0000 (0:00:00.512) 0:06:11.309 ********** 2025-05-03 00:56:21.621189 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:21.621195 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:56:21.621200 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:56:21.621206 | orchestrator | 2025-05-03 00:56:21.621212 | orchestrator | TASK [ceph-mgr : fetch ceph mgr keyring] *************************************** 2025-05-03 00:56:21.621218 | orchestrator | Saturday 03 May 2025 00:49:37 +0000 (0:00:00.648) 0:06:11.957 ********** 2025-05-03 00:56:21.621224 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.621233 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.621239 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.621245 | orchestrator | 2025-05-03 00:56:21.621262 | orchestrator | TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] ********************* 2025-05-03 00:56:21.621268 | orchestrator | Saturday 03 May 2025 00:49:38 +0000 (0:00:00.508) 0:06:12.466 ********** 2025-05-03 00:56:21.621274 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-03 00:56:21.621280 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-03 00:56:21.621286 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-03 00:56:21.621292 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-05-03 00:56:21.621298 | 
orchestrator | 2025-05-03 00:56:21.621303 | orchestrator | TASK [ceph-mgr : set_fact _mgr_keys] ******************************************* 2025-05-03 00:56:21.621309 | orchestrator | Saturday 03 May 2025 00:49:46 +0000 (0:00:07.629) 0:06:20.095 ********** 2025-05-03 00:56:21.621315 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:21.621321 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.621327 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.621336 | orchestrator | 2025-05-03 00:56:21.621342 | orchestrator | TASK [ceph-mgr : get keys from monitors] *************************************** 2025-05-03 00:56:21.621348 | orchestrator | Saturday 03 May 2025 00:49:46 +0000 (0:00:00.596) 0:06:20.691 ********** 2025-05-03 00:56:21.621354 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-03 00:56:21.621360 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-03 00:56:21.621365 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-03 00:56:21.621371 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-05-03 00:56:21.621377 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-03 00:56:21.621383 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-03 00:56:21.621389 | orchestrator | 2025-05-03 00:56:21.621394 | orchestrator | TASK [ceph-mgr : copy ceph key(s) if needed] *********************************** 2025-05-03 00:56:21.621400 | orchestrator | Saturday 03 May 2025 00:49:48 +0000 (0:00:01.963) 0:06:22.655 ********** 2025-05-03 00:56:21.621406 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-03 00:56:21.621412 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-03 00:56:21.621417 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-03 00:56:21.621423 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-03 00:56:21.621429 | orchestrator | changed: 
[testbed-node-1] => (item=None) 2025-05-03 00:56:21.621434 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-03 00:56:21.621440 | orchestrator | 2025-05-03 00:56:21.621446 | orchestrator | TASK [ceph-mgr : set mgr key permissions] ************************************** 2025-05-03 00:56:21.621452 | orchestrator | Saturday 03 May 2025 00:49:49 +0000 (0:00:01.238) 0:06:23.893 ********** 2025-05-03 00:56:21.621458 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:21.621463 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.621469 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.621475 | orchestrator | 2025-05-03 00:56:21.621481 | orchestrator | TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] ***************** 2025-05-03 00:56:21.621486 | orchestrator | Saturday 03 May 2025 00:49:50 +0000 (0:00:01.011) 0:06:24.904 ********** 2025-05-03 00:56:21.621492 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.621498 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.621504 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.621509 | orchestrator | 2025-05-03 00:56:21.621515 | orchestrator | TASK [ceph-mgr : include pre_requisite.yml] ************************************ 2025-05-03 00:56:21.621521 | orchestrator | Saturday 03 May 2025 00:49:51 +0000 (0:00:00.367) 0:06:25.272 ********** 2025-05-03 00:56:21.621527 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.621533 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.621538 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.621544 | orchestrator | 2025-05-03 00:56:21.621550 | orchestrator | TASK [ceph-mgr : include start_mgr.yml] **************************************** 2025-05-03 00:56:21.621556 | orchestrator | Saturday 03 May 2025 00:49:51 +0000 (0:00:00.363) 0:06:25.635 ********** 2025-05-03 00:56:21.621562 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-05-03 00:56:21.621568 | orchestrator | 2025-05-03 00:56:21.621577 | orchestrator | TASK [ceph-mgr : ensure systemd service override directory exists] ************* 2025-05-03 00:56:21.621583 | orchestrator | Saturday 03 May 2025 00:49:52 +0000 (0:00:00.864) 0:06:26.499 ********** 2025-05-03 00:56:21.621589 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.621595 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.621601 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.621607 | orchestrator | 2025-05-03 00:56:21.621626 | orchestrator | TASK [ceph-mgr : add ceph-mgr systemd service overrides] *********************** 2025-05-03 00:56:21.621633 | orchestrator | Saturday 03 May 2025 00:49:52 +0000 (0:00:00.384) 0:06:26.885 ********** 2025-05-03 00:56:21.621639 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.621645 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.621654 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.621660 | orchestrator | 2025-05-03 00:56:21.621666 | orchestrator | TASK [ceph-mgr : include_tasks systemd.yml] ************************************ 2025-05-03 00:56:21.621672 | orchestrator | Saturday 03 May 2025 00:49:53 +0000 (0:00:00.388) 0:06:27.273 ********** 2025-05-03 00:56:21.621678 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:56:21.621683 | orchestrator | 2025-05-03 00:56:21.621689 | orchestrator | TASK [ceph-mgr : generate systemd unit file] *********************************** 2025-05-03 00:56:21.621695 | orchestrator | Saturday 03 May 2025 00:49:54 +0000 (0:00:00.818) 0:06:28.092 ********** 2025-05-03 00:56:21.621701 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:21.621706 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:56:21.621712 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:56:21.621718 | orchestrator 
| 2025-05-03 00:56:21.621724 | orchestrator | TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************ 2025-05-03 00:56:21.621730 | orchestrator | Saturday 03 May 2025 00:49:55 +0000 (0:00:01.249) 0:06:29.341 ********** 2025-05-03 00:56:21.621735 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:21.621741 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:56:21.621747 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:56:21.621753 | orchestrator | 2025-05-03 00:56:21.621759 | orchestrator | TASK [ceph-mgr : enable ceph-mgr.target] *************************************** 2025-05-03 00:56:21.621764 | orchestrator | Saturday 03 May 2025 00:49:56 +0000 (0:00:01.255) 0:06:30.597 ********** 2025-05-03 00:56:21.621770 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:21.621776 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:56:21.621782 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:56:21.621788 | orchestrator | 2025-05-03 00:56:21.621793 | orchestrator | TASK [ceph-mgr : systemd start mgr] ******************************************** 2025-05-03 00:56:21.621799 | orchestrator | Saturday 03 May 2025 00:49:58 +0000 (0:00:01.917) 0:06:32.515 ********** 2025-05-03 00:56:21.621805 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:21.621811 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:56:21.621817 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:56:21.621823 | orchestrator | 2025-05-03 00:56:21.621829 | orchestrator | TASK [ceph-mgr : include mgr_modules.yml] ************************************** 2025-05-03 00:56:21.621834 | orchestrator | Saturday 03 May 2025 00:50:00 +0000 (0:00:01.901) 0:06:34.417 ********** 2025-05-03 00:56:21.621840 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.621846 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.621852 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 
2025-05-03 00:56:21.621858 | orchestrator | 2025-05-03 00:56:21.621864 | orchestrator | TASK [ceph-mgr : wait for all mgr to be up] ************************************ 2025-05-03 00:56:21.621869 | orchestrator | Saturday 03 May 2025 00:50:01 +0000 (0:00:00.596) 0:06:35.014 ********** 2025-05-03 00:56:21.621875 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left). 2025-05-03 00:56:21.621881 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left). 2025-05-03 00:56:21.621887 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-03 00:56:21.621893 | orchestrator | 2025-05-03 00:56:21.621899 | orchestrator | TASK [ceph-mgr : get enabled modules from ceph-mgr] **************************** 2025-05-03 00:56:21.621905 | orchestrator | Saturday 03 May 2025 00:50:14 +0000 (0:00:13.547) 0:06:48.561 ********** 2025-05-03 00:56:21.621910 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-03 00:56:21.621916 | orchestrator | 2025-05-03 00:56:21.621922 | orchestrator | TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-05-03 00:56:21.621928 | orchestrator | Saturday 03 May 2025 00:50:16 +0000 (0:00:01.759) 0:06:50.320 ********** 2025-05-03 00:56:21.621933 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.621939 | orchestrator | 2025-05-03 00:56:21.621948 | orchestrator | TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] ************************** 2025-05-03 00:56:21.621954 | orchestrator | Saturday 03 May 2025 00:50:16 +0000 (0:00:00.431) 0:06:50.752 ********** 2025-05-03 00:56:21.621960 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.621966 | orchestrator | 2025-05-03 00:56:21.621972 | orchestrator | TASK [ceph-mgr : disable ceph mgr enabled modules] ***************************** 2025-05-03 00:56:21.621977 | orchestrator | Saturday 03 May 
2025 00:50:17 +0000 (0:00:00.288) 0:06:51.040 ********** 2025-05-03 00:56:21.621983 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-05-03 00:56:21.621989 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-05-03 00:56:21.621995 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-05-03 00:56:21.622001 | orchestrator | 2025-05-03 00:56:21.622006 | orchestrator | TASK [ceph-mgr : add modules to ceph-mgr] ************************************** 2025-05-03 00:56:21.622029 | orchestrator | Saturday 03 May 2025 00:50:24 +0000 (0:00:07.168) 0:06:58.208 ********** 2025-05-03 00:56:21.622037 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-05-03 00:56:21.622043 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-05-03 00:56:21.622049 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-05-03 00:56:21.622054 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-05-03 00:56:21.622060 | orchestrator | 2025-05-03 00:56:21.622066 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-03 00:56:21.622072 | orchestrator | Saturday 03 May 2025 00:50:29 +0000 (0:00:04.871) 0:07:03.080 ********** 2025-05-03 00:56:21.622092 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:21.622099 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:56:21.622104 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:56:21.622110 | orchestrator | 2025-05-03 00:56:21.622116 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-03 00:56:21.622122 | orchestrator | Saturday 03 May 2025 00:50:29 +0000 (0:00:00.901) 0:07:03.981 ********** 2025-05-03 00:56:21.622128 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:56:21.622134 | orchestrator | 2025-05-03 00:56:21.622140 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-05-03 00:56:21.622145 | orchestrator | Saturday 03 May 2025 00:50:30 +0000 (0:00:00.579) 0:07:04.561 ********** 2025-05-03 00:56:21.622151 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:21.622157 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.622163 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.622169 | orchestrator | 2025-05-03 00:56:21.622175 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-03 00:56:21.622180 | orchestrator | Saturday 03 May 2025 00:50:30 +0000 (0:00:00.345) 0:07:04.906 ********** 2025-05-03 00:56:21.622186 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:21.622192 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:56:21.622198 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:56:21.622204 | orchestrator | 2025-05-03 00:56:21.622209 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-05-03 00:56:21.622215 | orchestrator | Saturday 03 May 2025 00:50:32 +0000 (0:00:01.292) 0:07:06.199 ********** 2025-05-03 00:56:21.622221 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-03 00:56:21.622227 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-03 00:56:21.622233 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-03 00:56:21.622238 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.622244 | orchestrator | 2025-05-03 00:56:21.622276 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-05-03 00:56:21.622283 | orchestrator | Saturday 03 May 2025 00:50:32 +0000 (0:00:00.689) 
0:07:06.888 ********** 2025-05-03 00:56:21.622294 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:21.622300 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.622305 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.622314 | orchestrator | 2025-05-03 00:56:21.622320 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-03 00:56:21.622326 | orchestrator | Saturday 03 May 2025 00:50:33 +0000 (0:00:00.373) 0:07:07.261 ********** 2025-05-03 00:56:21.622332 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:21.622338 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:56:21.622344 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:56:21.622349 | orchestrator | 2025-05-03 00:56:21.622355 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-05-03 00:56:21.622361 | orchestrator | 2025-05-03 00:56:21.622367 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-03 00:56:21.622373 | orchestrator | Saturday 03 May 2025 00:50:35 +0000 (0:00:02.069) 0:07:09.330 ********** 2025-05-03 00:56:21.622379 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 00:56:21.622384 | orchestrator | 2025-05-03 00:56:21.622390 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-03 00:56:21.622396 | orchestrator | Saturday 03 May 2025 00:50:36 +0000 (0:00:00.823) 0:07:10.154 ********** 2025-05-03 00:56:21.622402 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.622408 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.622414 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.622420 | orchestrator | 2025-05-03 00:56:21.622425 | orchestrator | TASK [ceph-handler : check for an osd container] 
******************************* 2025-05-03 00:56:21.622431 | orchestrator | Saturday 03 May 2025 00:50:36 +0000 (0:00:00.331) 0:07:10.485 ********** 2025-05-03 00:56:21.622437 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.622443 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.622449 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.622455 | orchestrator | 2025-05-03 00:56:21.622460 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-03 00:56:21.622466 | orchestrator | Saturday 03 May 2025 00:50:37 +0000 (0:00:00.955) 0:07:11.440 ********** 2025-05-03 00:56:21.622472 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.622478 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.622484 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.622489 | orchestrator | 2025-05-03 00:56:21.622495 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-03 00:56:21.622501 | orchestrator | Saturday 03 May 2025 00:50:38 +0000 (0:00:00.670) 0:07:12.111 ********** 2025-05-03 00:56:21.622507 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.622513 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.622518 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.622524 | orchestrator | 2025-05-03 00:56:21.622530 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-03 00:56:21.622536 | orchestrator | Saturday 03 May 2025 00:50:38 +0000 (0:00:00.637) 0:07:12.749 ********** 2025-05-03 00:56:21.622542 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.622547 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.622553 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.622559 | orchestrator | 2025-05-03 00:56:21.622565 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-03 
00:56:21.622571 | orchestrator | Saturday 03 May 2025 00:50:39 +0000 (0:00:00.274) 0:07:13.023 ********** 2025-05-03 00:56:21.622576 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.622582 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.622588 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.622594 | orchestrator | 2025-05-03 00:56:21.622602 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-03 00:56:21.622608 | orchestrator | Saturday 03 May 2025 00:50:39 +0000 (0:00:00.462) 0:07:13.486 ********** 2025-05-03 00:56:21.622617 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.622623 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.622644 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.622651 | orchestrator | 2025-05-03 00:56:21.622657 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-03 00:56:21.622663 | orchestrator | Saturday 03 May 2025 00:50:39 +0000 (0:00:00.270) 0:07:13.757 ********** 2025-05-03 00:56:21.622668 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.622674 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.622680 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.622686 | orchestrator | 2025-05-03 00:56:21.622692 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-03 00:56:21.622698 | orchestrator | Saturday 03 May 2025 00:50:40 +0000 (0:00:00.270) 0:07:14.027 ********** 2025-05-03 00:56:21.622703 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.622709 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.622715 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.622721 | orchestrator | 2025-05-03 00:56:21.622727 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-03 
00:56:21.622733 | orchestrator | Saturday 03 May 2025 00:50:40 +0000 (0:00:00.277) 0:07:14.304 **********
2025-05-03 00:56:21.622738 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.622744 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.622750 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.622756 | orchestrator |
2025-05-03 00:56:21.622762 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-05-03 00:56:21.622767 | orchestrator | Saturday 03 May 2025 00:50:40 +0000 (0:00:00.454) 0:07:14.758 **********
2025-05-03 00:56:21.622773 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.622779 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.622785 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.622791 | orchestrator |
2025-05-03 00:56:21.622797 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-05-03 00:56:21.622802 | orchestrator | Saturday 03 May 2025 00:50:41 +0000 (0:00:00.682) 0:07:15.441 **********
2025-05-03 00:56:21.622808 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.622814 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.622820 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.622825 | orchestrator |
2025-05-03 00:56:21.622831 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-05-03 00:56:21.622837 | orchestrator | Saturday 03 May 2025 00:50:41 +0000 (0:00:00.279) 0:07:15.720 **********
2025-05-03 00:56:21.622843 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.622849 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.622855 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.622860 | orchestrator |
2025-05-03 00:56:21.622866 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-05-03 00:56:21.622872 | orchestrator | Saturday 03 May 2025 00:50:41 +0000 (0:00:00.267) 0:07:15.988 **********
2025-05-03 00:56:21.622878 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.622884 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.622890 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.622895 | orchestrator |
2025-05-03 00:56:21.622901 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-05-03 00:56:21.622907 | orchestrator | Saturday 03 May 2025 00:50:42 +0000 (0:00:00.477) 0:07:16.466 **********
2025-05-03 00:56:21.622913 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.622919 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.622925 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.622931 | orchestrator |
2025-05-03 00:56:21.622937 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-05-03 00:56:21.622942 | orchestrator | Saturday 03 May 2025 00:50:42 +0000 (0:00:00.303) 0:07:16.769 **********
2025-05-03 00:56:21.622948 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.622954 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.622964 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.622973 | orchestrator |
2025-05-03 00:56:21.622979 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-05-03 00:56:21.622984 | orchestrator | Saturday 03 May 2025 00:50:43 +0000 (0:00:00.284) 0:07:17.054 **********
2025-05-03 00:56:21.622990 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.622996 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.623002 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.623008 | orchestrator |
2025-05-03 00:56:21.623013 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-05-03 00:56:21.623019 | orchestrator | Saturday 03 May 2025 00:50:43 +0000 (0:00:00.270) 0:07:17.324 **********
2025-05-03 00:56:21.623025 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.623031 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.623037 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.623043 | orchestrator |
2025-05-03 00:56:21.623048 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-05-03 00:56:21.623054 | orchestrator | Saturday 03 May 2025 00:50:43 +0000 (0:00:00.401) 0:07:17.726 **********
2025-05-03 00:56:21.623060 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.623066 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.623071 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.623077 | orchestrator |
2025-05-03 00:56:21.623083 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-05-03 00:56:21.623089 | orchestrator | Saturday 03 May 2025 00:50:43 +0000 (0:00:00.259) 0:07:17.985 **********
2025-05-03 00:56:21.623094 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.623100 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.623106 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.623112 | orchestrator |
2025-05-03 00:56:21.623118 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-05-03 00:56:21.623123 | orchestrator | Saturday 03 May 2025 00:50:44 +0000 (0:00:00.291) 0:07:18.276 **********
2025-05-03 00:56:21.623129 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.623135 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.623141 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.623147 | orchestrator |
2025-05-03 00:56:21.623152 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-05-03 00:56:21.623158 | orchestrator | Saturday 03 May 2025 00:50:44 +0000 (0:00:00.276) 0:07:18.552 **********
2025-05-03 00:56:21.623164 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.623170 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.623176 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.623182 | orchestrator |
2025-05-03 00:56:21.623203 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-05-03 00:56:21.623210 | orchestrator | Saturday 03 May 2025 00:50:45 +0000 (0:00:00.522) 0:07:19.075 **********
2025-05-03 00:56:21.623216 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.623222 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.623228 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.623234 | orchestrator |
2025-05-03 00:56:21.623240 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-05-03 00:56:21.623245 | orchestrator | Saturday 03 May 2025 00:50:45 +0000 (0:00:00.306) 0:07:19.382 **********
2025-05-03 00:56:21.623261 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.623268 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.623273 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.623279 | orchestrator |
2025-05-03 00:56:21.623285 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-05-03 00:56:21.623291 | orchestrator | Saturday 03 May 2025 00:50:45 +0000 (0:00:00.293) 0:07:19.676 **********
2025-05-03 00:56:21.623297 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.623303 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.623308 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.623318 | orchestrator |
2025-05-03 00:56:21.623324 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-05-03 00:56:21.623330 | orchestrator | Saturday 03 May 2025 00:50:45 +0000 (0:00:00.272) 0:07:19.948 **********
2025-05-03 00:56:21.623336 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.623342 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.623347 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.623353 | orchestrator |
2025-05-03 00:56:21.623359 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-05-03 00:56:21.623365 | orchestrator | Saturday 03 May 2025 00:50:46 +0000 (0:00:00.423) 0:07:20.372 **********
2025-05-03 00:56:21.623370 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.623376 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.623382 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.623388 | orchestrator |
2025-05-03 00:56:21.623394 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-05-03 00:56:21.623400 | orchestrator | Saturday 03 May 2025 00:50:46 +0000 (0:00:00.321) 0:07:20.694 **********
2025-05-03 00:56:21.623405 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.623411 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.623417 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.623423 | orchestrator |
2025-05-03 00:56:21.623429 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-05-03 00:56:21.623435 | orchestrator | Saturday 03 May 2025 00:50:47 +0000 (0:00:00.332) 0:07:21.026 **********
2025-05-03 00:56:21.623440 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.623446 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.623452 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.623458 | orchestrator |
2025-05-03 00:56:21.623464 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-05-03 00:56:21.623469 | orchestrator | Saturday 03 May 2025 00:50:47 +0000 (0:00:00.320) 0:07:21.347 **********
2025-05-03 00:56:21.623475 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.623481 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.623487 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.623493 | orchestrator |
2025-05-03 00:56:21.623499 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-05-03 00:56:21.623504 | orchestrator | Saturday 03 May 2025 00:50:47 +0000 (0:00:00.632) 0:07:21.979 **********
2025-05-03 00:56:21.623510 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.623516 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.623522 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.623527 | orchestrator |
2025-05-03 00:56:21.623533 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-05-03 00:56:21.623539 | orchestrator | Saturday 03 May 2025 00:50:48 +0000 (0:00:00.341) 0:07:22.320 **********
2025-05-03 00:56:21.623545 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.623551 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.623556 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.623562 | orchestrator |
2025-05-03 00:56:21.623568 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-05-03 00:56:21.623574 | orchestrator | Saturday 03 May 2025 00:50:48 +0000 (0:00:00.325) 0:07:22.646 **********
2025-05-03 00:56:21.623580 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-03 00:56:21.623585 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-03 00:56:21.623591 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-03 00:56:21.623597 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-03 00:56:21.623603 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.623608 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.623614 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-03 00:56:21.623624 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-03 00:56:21.623630 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.623639 | orchestrator |
2025-05-03 00:56:21.623645 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-05-03 00:56:21.623651 | orchestrator | Saturday 03 May 2025 00:50:49 +0000 (0:00:00.358) 0:07:23.004 **********
2025-05-03 00:56:21.623656 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)
2025-05-03 00:56:21.623665 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)
2025-05-03 00:56:21.623671 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.623677 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)
2025-05-03 00:56:21.623683 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)
2025-05-03 00:56:21.623688 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.623694 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)
2025-05-03 00:56:21.623700 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)
2025-05-03 00:56:21.623719 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.623726 | orchestrator |
2025-05-03 00:56:21.623732 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-05-03 00:56:21.623738 | orchestrator | Saturday 03 May 2025 00:50:49 +0000 (0:00:00.730) 0:07:23.735 **********
2025-05-03 00:56:21.623744 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.623750 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.623755 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.623761 | orchestrator |
2025-05-03 00:56:21.623767 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-05-03 00:56:21.623773 | orchestrator | Saturday 03 May 2025 00:50:50 +0000 (0:00:00.405) 0:07:24.141 **********
2025-05-03 00:56:21.623779 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.623784 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.623790 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.623796 | orchestrator |
2025-05-03 00:56:21.623802 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-03 00:56:21.623808 | orchestrator | Saturday 03 May 2025 00:50:50 +0000 (0:00:00.327) 0:07:24.469 **********
2025-05-03 00:56:21.623814 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.623819 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.623825 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.623831 | orchestrator |
2025-05-03 00:56:21.623837 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-03 00:56:21.623843 | orchestrator | Saturday 03 May 2025 00:50:50 +0000 (0:00:00.328) 0:07:24.797 **********
2025-05-03 00:56:21.623848 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.623854 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.623860 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.623866 | orchestrator |
2025-05-03 00:56:21.623872 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-03 00:56:21.623878 | orchestrator | Saturday 03 May 2025 00:50:51 +0000 (0:00:00.667) 0:07:25.464 **********
2025-05-03 00:56:21.623884 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.623889 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.623895 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.623901 | orchestrator |
2025-05-03 00:56:21.623907 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-03 00:56:21.623912 | orchestrator | Saturday 03 May 2025 00:50:51 +0000 (0:00:00.363) 0:07:25.828 **********
2025-05-03 00:56:21.623918 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.623924 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.623930 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.623936 | orchestrator |
2025-05-03 00:56:21.623942 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-03 00:56:21.623952 | orchestrator | Saturday 03 May 2025 00:50:52 +0000 (0:00:00.316) 0:07:26.145 **********
2025-05-03 00:56:21.623962 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-03 00:56:21.623968 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-03 00:56:21.623974 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-03 00:56:21.623979 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.623985 | orchestrator |
2025-05-03 00:56:21.623991 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-03 00:56:21.623997 | orchestrator | Saturday 03 May 2025 00:50:52 +0000 (0:00:00.430) 0:07:26.576 **********
2025-05-03 00:56:21.624003 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-03 00:56:21.624009 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-03 00:56:21.624014 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-03 00:56:21.624020 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.624026 | orchestrator |
2025-05-03 00:56:21.624032 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-03 00:56:21.624038 | orchestrator | Saturday 03 May 2025 00:50:53 +0000 (0:00:00.421) 0:07:26.997 **********
2025-05-03 00:56:21.624044 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-03 00:56:21.624049 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-03 00:56:21.624055 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-03 00:56:21.624061 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.624067 | orchestrator |
2025-05-03 00:56:21.624073 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-03 00:56:21.624078 | orchestrator | Saturday 03 May 2025 00:50:53 +0000 (0:00:00.742) 0:07:27.740 **********
2025-05-03 00:56:21.624084 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.624090 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.624096 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.624102 | orchestrator |
2025-05-03 00:56:21.624107 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-03 00:56:21.624113 | orchestrator | Saturday 03 May 2025 00:50:54 +0000 (0:00:00.616) 0:07:28.356 **********
2025-05-03 00:56:21.624119 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-03 00:56:21.624125 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.624131 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-03 00:56:21.624136 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.624142 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-03 00:56:21.624148 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.624154 | orchestrator |
2025-05-03 00:56:21.624160 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-03 00:56:21.624166 | orchestrator | Saturday 03 May 2025 00:50:54 +0000 (0:00:00.516) 0:07:28.873 **********
2025-05-03 00:56:21.624171 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.624177 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.624183 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.624189 | orchestrator |
2025-05-03 00:56:21.624195 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-03 00:56:21.624200 | orchestrator | Saturday 03 May 2025 00:50:55 +0000 (0:00:00.388) 0:07:29.261 **********
2025-05-03 00:56:21.624206 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.624225 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.624232 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.624238 | orchestrator |
2025-05-03 00:56:21.624244 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-03 00:56:21.624264 | orchestrator | Saturday 03 May 2025 00:50:55 +0000 (0:00:00.335) 0:07:29.597 **********
2025-05-03 00:56:21.624270 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-03 00:56:21.624276 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-03 00:56:21.624282 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.624292 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.624297 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-03 00:56:21.624303 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.624309 | orchestrator |
2025-05-03 00:56:21.624315 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-03 00:56:21.624321 | orchestrator | Saturday 03 May 2025 00:50:56 +0000 (0:00:00.734) 0:07:30.331 **********
2025-05-03 00:56:21.624327 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-03 00:56:21.624332 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.624338 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-03 00:56:21.624344 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.624350 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-03 00:56:21.624356 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.624362 | orchestrator |
2025-05-03 00:56:21.624368 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-03 00:56:21.624373 | orchestrator | Saturday 03 May 2025 00:50:56 +0000 (0:00:00.292) 0:07:30.624 **********
2025-05-03 00:56:21.624379 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-03 00:56:21.624385 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-03 00:56:21.624391 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-03 00:56:21.624397 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.624403 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-03 00:56:21.624409 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-03 00:56:21.624414 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-03 00:56:21.624420 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.624426 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-03 00:56:21.624432 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-03 00:56:21.624438 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-03 00:56:21.624444 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.624449 | orchestrator |
2025-05-03 00:56:21.624455 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-05-03 00:56:21.624461 | orchestrator | Saturday 03 May 2025 00:50:57 +0000 (0:00:00.547) 0:07:31.172 **********
2025-05-03 00:56:21.624467 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.624473 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.624479 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.624484 | orchestrator |
2025-05-03 00:56:21.624490 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-05-03 00:56:21.624496 | orchestrator | Saturday 03 May 2025 00:50:57 +0000 (0:00:00.703) 0:07:31.876 **********
2025-05-03 00:56:21.624502 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-03 00:56:21.624508 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.624514 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-03 00:56:21.624520 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.624525 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-03 00:56:21.624531 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.624537 | orchestrator |
2025-05-03 00:56:21.624543 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-05-03 00:56:21.624549 | orchestrator | Saturday 03 May 2025 00:50:58 +0000 (0:00:00.489) 0:07:32.365 **********
2025-05-03 00:56:21.624555 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.624564 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.624570 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.624578 | orchestrator |
2025-05-03 00:56:21.624584 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-05-03 00:56:21.624590 | orchestrator | Saturday 03 May 2025 00:50:59 +0000 (0:00:00.647) 0:07:33.013 **********
2025-05-03 00:56:21.624596 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.624602 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.624607 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.624613 | orchestrator |
2025-05-03 00:56:21.624619 | orchestrator | TASK [ceph-osd : set_fact add_osd] *********************************************
2025-05-03 00:56:21.624625 | orchestrator | Saturday 03 May 2025 00:50:59 +0000 (0:00:00.469) 0:07:33.482 **********
2025-05-03 00:56:21.624631 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.624636 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.624642 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.624648 | orchestrator |
2025-05-03 00:56:21.624654 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] **********************************
2025-05-03 00:56:21.624659 | orchestrator | Saturday 03 May 2025 00:50:59 +0000 (0:00:00.482) 0:07:33.965 **********
2025-05-03 00:56:21.624668 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-05-03 00:56:21.624674 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-03 00:56:21.624680 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-03 00:56:21.624685 | orchestrator |
2025-05-03 00:56:21.624706 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ******************************
2025-05-03 00:56:21.624713 | orchestrator | Saturday 03 May 2025 00:51:00 +0000 (0:00:00.608) 0:07:34.573 **********
2025-05-03 00:56:21.624719 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-03 00:56:21.624725 | orchestrator |
2025-05-03 00:56:21.624730 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ********************
2025-05-03 00:56:21.624736 | orchestrator | Saturday 03 May 2025 00:51:01 +0000 (0:00:00.461) 0:07:35.035 **********
2025-05-03 00:56:21.624742 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.624748 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.624754 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.624759 | orchestrator |
2025-05-03 00:56:21.624765 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ******************
2025-05-03 00:56:21.624771 | orchestrator | Saturday 03 May 2025 00:51:01 +0000 (0:00:00.464) 0:07:35.500 **********
2025-05-03 00:56:21.624777 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.624783 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.624789 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.624795 | orchestrator |
2025-05-03 00:56:21.624801 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] **********************************
2025-05-03 00:56:21.624806 | orchestrator | Saturday 03 May 2025 00:51:01 +0000 (0:00:00.300) 0:07:35.800 **********
2025-05-03 00:56:21.624812 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.624818 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.624824 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.624830 | orchestrator |
2025-05-03 00:56:21.624835 | orchestrator | TASK [ceph-osd : disable transparent hugepage] *********************************
2025-05-03 00:56:21.624841 | orchestrator | Saturday 03 May 2025 00:51:02 +0000 (0:00:00.311) 0:07:36.112 **********
2025-05-03 00:56:21.624847 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.624853 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.624859 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.624865 | orchestrator |
2025-05-03 00:56:21.624870 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] *******************************
2025-05-03 00:56:21.624876 | orchestrator | Saturday 03 May 2025 00:51:02 +0000 (0:00:00.320) 0:07:36.433 **********
2025-05-03 00:56:21.624882 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.624888 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.624897 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.624903 | orchestrator |
2025-05-03 00:56:21.624909 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] **********************************
2025-05-03 00:56:21.624915 | orchestrator | Saturday 03 May 2025 00:51:03 +0000 (0:00:00.893) 0:07:37.326 **********
2025-05-03 00:56:21.624920 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.624926 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.624932 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.624938 | orchestrator |
2025-05-03 00:56:21.624943 | orchestrator | TASK [ceph-osd : apply operating system tuning] ********************************
2025-05-03 00:56:21.624949 | orchestrator | Saturday 03 May 2025 00:51:03 +0000 (0:00:00.365) 0:07:37.692 **********
2025-05-03 00:56:21.624955 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-05-03 00:56:21.624964 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-05-03 00:56:21.624970 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-05-03 00:56:21.624976 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-05-03 00:56:21.624982 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-05-03 00:56:21.624988 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-05-03 00:56:21.624994 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-05-03 00:56:21.624999 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-05-03 00:56:21.625005 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-05-03 00:56:21.625011 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2025-05-03 00:56:21.625017 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2025-05-03 00:56:21.625023 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2025-05-03 00:56:21.625029 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-05-03 00:56:21.625035 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-05-03 00:56:21.625040 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-05-03 00:56:21.625046 | orchestrator |
2025-05-03 00:56:21.625052 | orchestrator | TASK [ceph-osd : install dependencies] *****************************************
2025-05-03 00:56:21.625058 | orchestrator | Saturday 03 May 2025 00:51:05 +0000 (0:00:02.221) 0:07:39.913 **********
2025-05-03 00:56:21.625064 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.625070 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.625075 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.625081 | orchestrator |
2025-05-03 00:56:21.625087 | orchestrator | TASK [ceph-osd : include_tasks common.yml] *************************************
2025-05-03 00:56:21.625093 | orchestrator | Saturday 03 May 2025 00:51:06 +0000 (0:00:00.312) 0:07:40.225 **********
2025-05-03 00:56:21.625099 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-03 00:56:21.625105 | orchestrator |
2025-05-03 00:56:21.625113 | orchestrator | TASK [ceph-osd : create bootstrap-osd and osd directories] *********************
2025-05-03 00:56:21.625132 | orchestrator | Saturday 03 May 2025 00:51:07 +0000 (0:00:00.819) 0:07:41.045 **********
2025-05-03 00:56:21.625139 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-05-03 00:56:21.625145 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-05-03 00:56:21.625151 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-05-03 00:56:21.625156 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-05-03 00:56:21.625162 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-05-03 00:56:21.625171 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-05-03 00:56:21.625177 | orchestrator |
2025-05-03 00:56:21.625183 | orchestrator | TASK [ceph-osd : get keys from monitors] ***************************************
2025-05-03 00:56:21.625189 | orchestrator | Saturday 03 May 2025 00:51:08 +0000 (0:00:00.987) 0:07:42.033 **********
2025-05-03 00:56:21.625195 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-03 00:56:21.625200 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-03 00:56:21.625206 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-05-03 00:56:21.625212 | orchestrator |
2025-05-03 00:56:21.625218 | orchestrator | TASK [ceph-osd : copy ceph key(s) if needed] ***********************************
2025-05-03 00:56:21.625224 | orchestrator | Saturday 03 May 2025 00:51:09 +0000 (0:00:01.791) 0:07:43.824 **********
2025-05-03 00:56:21.625229 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-03 00:56:21.625235 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-03 00:56:21.625245 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:56:21.625265 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-03 00:56:21.625271 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-03 00:56:21.625277 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:56:21.625283 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-03 00:56:21.625289 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-03 00:56:21.625295 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:56:21.625300 | orchestrator |
2025-05-03 00:56:21.625306 | orchestrator | TASK [ceph-osd : set noup flag] ************************************************
2025-05-03 00:56:21.625312 | orchestrator | Saturday 03 May 2025 00:51:11 +0000 (0:00:01.496) 0:07:45.321 **********
2025-05-03 00:56:21.625318 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-03 00:56:21.625324 | orchestrator |
2025-05-03 00:56:21.625330 | orchestrator | TASK [ceph-osd : include container_options_facts.yml] **************************
2025-05-03 00:56:21.625335 | orchestrator | Saturday 03 May 2025 00:51:13 +0000 (0:00:02.298) 0:07:47.619 **********
2025-05-03 00:56:21.625341 | orchestrator | included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-03 00:56:21.625347 | orchestrator |
2025-05-03 00:56:21.625353 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] ***
2025-05-03 00:56:21.625359 | orchestrator | Saturday 03 May 2025 00:51:14 +0000 (0:00:00.536) 0:07:48.155 **********
2025-05-03 00:56:21.625365 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.625371 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.625377 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.625383 | orchestrator |
2025-05-03 00:56:21.625388 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] ***
2025-05-03 00:56:21.625394 | orchestrator | Saturday 03 May 2025 00:51:14 +0000 (0:00:00.532) 0:07:48.688 **********
2025-05-03 00:56:21.625400 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.625406 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.625412 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.625418 | orchestrator |
2025-05-03 00:56:21.625424 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] ***
2025-05-03 00:56:21.625429 | orchestrator | Saturday 03 May 2025 00:51:15 +0000 (0:00:00.332) 0:07:49.021 **********
2025-05-03 00:56:21.625438 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.625444 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.625450 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.625456 | orchestrator |
2025-05-03 00:56:21.625462 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] ***
2025-05-03 00:56:21.625468 | orchestrator | Saturday 03 May 2025 00:51:15 +0000 (0:00:00.356) 0:07:49.377 **********
2025-05-03 00:56:21.625477 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.625483 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.625488 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.625494 | orchestrator |
2025-05-03 00:56:21.625500 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm.yml] ******************************
2025-05-03 00:56:21.625506 | orchestrator | Saturday 03 May 2025 00:51:15 +0000 (0:00:00.331) 0:07:49.709 **********
2025-05-03 00:56:21.625512 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-03 00:56:21.625518 | orchestrator |
2025-05-03 00:56:21.625523 | orchestrator | TASK [ceph-osd : use ceph-volume to create bluestore osds] *********************
2025-05-03 00:56:21.625529 | orchestrator | Saturday 03 May 2025 00:51:16 +0000 (0:00:00.912) 0:07:50.621 **********
2025-05-03 00:56:21.625535 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-63c4e6bd-963b-5ec8-a8d0-e52c79716553', 'data_vg': 'ceph-63c4e6bd-963b-5ec8-a8d0-e52c79716553'})
2025-05-03 00:56:21.625543 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-eca5292b-8794-515a-ad73-b5efc7970d6a', 'data_vg': 'ceph-eca5292b-8794-515a-ad73-b5efc7970d6a'})
2025-05-03 00:56:21.625549 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ba494882-e80b-5600-bb3d-47da88e10312', 'data_vg': 'ceph-ba494882-e80b-5600-bb3d-47da88e10312'})
2025-05-03 00:56:21.625569 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f0db6d06-6fa6-557d-977f-52f0cf84ead8', 'data_vg': 'ceph-f0db6d06-6fa6-557d-977f-52f0cf84ead8'})
2025-05-03 00:56:21.625577 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a7a18630-ef35-59a0-a2f0-363b4ab3cd76', 'data_vg': 'ceph-a7a18630-ef35-59a0-a2f0-363b4ab3cd76'})
2025-05-03 00:56:21.625583 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1900210e-f5cf-596b-8948-bbf6ca001e1a', 'data_vg': 'ceph-1900210e-f5cf-596b-8948-bbf6ca001e1a'})
2025-05-03 00:56:21.625588 | orchestrator |
2025-05-03 00:56:21.625594 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] ************************
2025-05-03 00:56:21.625600 | orchestrator | Saturday 03 May 2025 00:51:58 +0000 (0:00:42.011) 0:08:32.633 **********
2025-05-03 00:56:21.625606 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.625612 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.625618 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.625624 | orchestrator |
2025-05-03 00:56:21.625630 | orchestrator | TASK [ceph-osd : include_tasks start_osds.yml] *********************************
2025-05-03 00:56:21.625635 | orchestrator | Saturday 03 May 2025 00:51:59 +0000 (0:00:00.456) 0:08:33.090 **********
2025-05-03 00:56:21.625641 | orchestrator |
included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 00:56:21.625647 | orchestrator | 2025-05-03 00:56:21.625653 | orchestrator | TASK [ceph-osd : get osd ids] ************************************************** 2025-05-03 00:56:21.625661 | orchestrator | Saturday 03 May 2025 00:51:59 +0000 (0:00:00.594) 0:08:33.684 ********** 2025-05-03 00:56:21.625667 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.625673 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.625679 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.625689 | orchestrator | 2025-05-03 00:56:21.625695 | orchestrator | TASK [ceph-osd : collect osd ids] ********************************************** 2025-05-03 00:56:21.625701 | orchestrator | Saturday 03 May 2025 00:52:00 +0000 (0:00:00.649) 0:08:34.334 ********** 2025-05-03 00:56:21.625707 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:56:21.625713 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:56:21.625718 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:56:21.625724 | orchestrator | 2025-05-03 00:56:21.625730 | orchestrator | TASK [ceph-osd : include_tasks systemd.yml] ************************************ 2025-05-03 00:56:21.625736 | orchestrator | Saturday 03 May 2025 00:52:02 +0000 (0:00:01.904) 0:08:36.238 ********** 2025-05-03 00:56:21.625742 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 00:56:21.625752 | orchestrator | 2025-05-03 00:56:21.625757 | orchestrator | TASK [ceph-osd : generate systemd unit file] *********************************** 2025-05-03 00:56:21.625763 | orchestrator | Saturday 03 May 2025 00:52:02 +0000 (0:00:00.556) 0:08:36.795 ********** 2025-05-03 00:56:21.625769 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:56:21.625775 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:56:21.625781 | orchestrator | changed: 
[testbed-node-5] 2025-05-03 00:56:21.625787 | orchestrator | 2025-05-03 00:56:21.625792 | orchestrator | TASK [ceph-osd : generate systemd ceph-osd target file] ************************ 2025-05-03 00:56:21.625801 | orchestrator | Saturday 03 May 2025 00:52:04 +0000 (0:00:01.426) 0:08:38.221 ********** 2025-05-03 00:56:21.625807 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:56:21.625813 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:56:21.625819 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:56:21.625824 | orchestrator | 2025-05-03 00:56:21.625830 | orchestrator | TASK [ceph-osd : enable ceph-osd.target] *************************************** 2025-05-03 00:56:21.625836 | orchestrator | Saturday 03 May 2025 00:52:05 +0000 (0:00:01.175) 0:08:39.397 ********** 2025-05-03 00:56:21.625842 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:56:21.625848 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:56:21.625853 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:56:21.625859 | orchestrator | 2025-05-03 00:56:21.625865 | orchestrator | TASK [ceph-osd : ensure systemd service override directory exists] ************* 2025-05-03 00:56:21.625871 | orchestrator | Saturday 03 May 2025 00:52:07 +0000 (0:00:01.623) 0:08:41.021 ********** 2025-05-03 00:56:21.625877 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.625882 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.625888 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.625894 | orchestrator | 2025-05-03 00:56:21.625900 | orchestrator | TASK [ceph-osd : add ceph-osd systemd service overrides] *********************** 2025-05-03 00:56:21.625906 | orchestrator | Saturday 03 May 2025 00:52:07 +0000 (0:00:00.362) 0:08:41.383 ********** 2025-05-03 00:56:21.625912 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.625917 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.625923 | orchestrator | skipping: 
[testbed-node-5] 2025-05-03 00:56:21.625929 | orchestrator | 2025-05-03 00:56:21.625935 | orchestrator | TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] *** 2025-05-03 00:56:21.625940 | orchestrator | Saturday 03 May 2025 00:52:07 +0000 (0:00:00.599) 0:08:41.983 ********** 2025-05-03 00:56:21.625946 | orchestrator | ok: [testbed-node-3] => (item=1) 2025-05-03 00:56:21.625952 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-03 00:56:21.625958 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-05-03 00:56:21.625964 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-05-03 00:56:21.625970 | orchestrator | ok: [testbed-node-4] => (item=5) 2025-05-03 00:56:21.625975 | orchestrator | ok: [testbed-node-5] => (item=3) 2025-05-03 00:56:21.625981 | orchestrator | 2025-05-03 00:56:21.625987 | orchestrator | TASK [ceph-osd : systemd start osd] ******************************************** 2025-05-03 00:56:21.625993 | orchestrator | Saturday 03 May 2025 00:52:09 +0000 (0:00:01.047) 0:08:43.030 ********** 2025-05-03 00:56:21.625999 | orchestrator | changed: [testbed-node-3] => (item=1) 2025-05-03 00:56:21.626004 | orchestrator | changed: [testbed-node-4] => (item=0) 2025-05-03 00:56:21.626010 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-05-03 00:56:21.626031 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-05-03 00:56:21.626038 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-05-03 00:56:21.626044 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-05-03 00:56:21.626049 | orchestrator | 2025-05-03 00:56:21.626069 | orchestrator | TASK [ceph-osd : unset noup flag] ********************************************** 2025-05-03 00:56:21.626076 | orchestrator | Saturday 03 May 2025 00:52:12 +0000 (0:00:03.306) 0:08:46.337 ********** 2025-05-03 00:56:21.626082 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.626088 | orchestrator | skipping: [testbed-node-4] 2025-05-03 
00:56:21.626094 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-03 00:56:21.626104 | orchestrator | 2025-05-03 00:56:21.626110 | orchestrator | TASK [ceph-osd : wait for all osd to be up] ************************************ 2025-05-03 00:56:21.626115 | orchestrator | Saturday 03 May 2025 00:52:15 +0000 (0:00:02.714) 0:08:49.052 ********** 2025-05-03 00:56:21.626121 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.626127 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.626133 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries left). 2025-05-03 00:56:21.626139 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-03 00:56:21.626145 | orchestrator | 2025-05-03 00:56:21.626150 | orchestrator | TASK [ceph-osd : include crush_rules.yml] ************************************** 2025-05-03 00:56:21.626156 | orchestrator | Saturday 03 May 2025 00:52:27 +0000 (0:00:12.451) 0:09:01.503 ********** 2025-05-03 00:56:21.626162 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.626168 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.626174 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.626180 | orchestrator | 2025-05-03 00:56:21.626186 | orchestrator | TASK [ceph-osd : include openstack_config.yml] ********************************* 2025-05-03 00:56:21.626191 | orchestrator | Saturday 03 May 2025 00:52:27 +0000 (0:00:00.458) 0:09:01.961 ********** 2025-05-03 00:56:21.626197 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.626203 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.626209 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.626215 | orchestrator | 2025-05-03 00:56:21.626220 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-03 00:56:21.626226 | orchestrator | Saturday 03 May 2025 
00:52:29 +0000 (0:00:01.125) 0:09:03.087 ********** 2025-05-03 00:56:21.626232 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:56:21.626238 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:56:21.626243 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:56:21.626249 | orchestrator | 2025-05-03 00:56:21.626287 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-05-03 00:56:21.626294 | orchestrator | Saturday 03 May 2025 00:52:29 +0000 (0:00:00.861) 0:09:03.948 ********** 2025-05-03 00:56:21.626300 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 00:56:21.626305 | orchestrator | 2025-05-03 00:56:21.626311 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] ********************** 2025-05-03 00:56:21.626317 | orchestrator | Saturday 03 May 2025 00:52:30 +0000 (0:00:00.539) 0:09:04.488 ********** 2025-05-03 00:56:21.626323 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-03 00:56:21.626329 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-03 00:56:21.626334 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-03 00:56:21.626340 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.626346 | orchestrator | 2025-05-03 00:56:21.626352 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ******** 2025-05-03 00:56:21.626358 | orchestrator | Saturday 03 May 2025 00:52:30 +0000 (0:00:00.397) 0:09:04.886 ********** 2025-05-03 00:56:21.626363 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.626369 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.626375 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.626381 | orchestrator | 2025-05-03 00:56:21.626386 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] 
******************************* 2025-05-03 00:56:21.626392 | orchestrator | Saturday 03 May 2025 00:52:31 +0000 (0:00:00.304) 0:09:05.190 ********** 2025-05-03 00:56:21.626398 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.626404 | orchestrator | 2025-05-03 00:56:21.626410 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-05-03 00:56:21.626416 | orchestrator | Saturday 03 May 2025 00:52:31 +0000 (0:00:00.234) 0:09:05.425 ********** 2025-05-03 00:56:21.626421 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.626431 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.626437 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.626443 | orchestrator | 2025-05-03 00:56:21.626449 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] ********************************* 2025-05-03 00:56:21.626457 | orchestrator | Saturday 03 May 2025 00:52:32 +0000 (0:00:00.582) 0:09:06.007 ********** 2025-05-03 00:56:21.626464 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.626470 | orchestrator | 2025-05-03 00:56:21.626475 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ******************** 2025-05-03 00:56:21.626481 | orchestrator | Saturday 03 May 2025 00:52:32 +0000 (0:00:00.239) 0:09:06.246 ********** 2025-05-03 00:56:21.626487 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.626493 | orchestrator | 2025-05-03 00:56:21.626499 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-05-03 00:56:21.626505 | orchestrator | Saturday 03 May 2025 00:52:32 +0000 (0:00:00.231) 0:09:06.478 ********** 2025-05-03 00:56:21.626511 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.626517 | orchestrator | 2025-05-03 00:56:21.626523 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ****************************** 2025-05-03 00:56:21.626528 | 
orchestrator | Saturday 03 May 2025 00:52:32 +0000 (0:00:00.130) 0:09:06.609 ********** 2025-05-03 00:56:21.626534 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.626540 | orchestrator | 2025-05-03 00:56:21.626546 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] ***************** 2025-05-03 00:56:21.626552 | orchestrator | Saturday 03 May 2025 00:52:32 +0000 (0:00:00.229) 0:09:06.838 ********** 2025-05-03 00:56:21.626557 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.626563 | orchestrator | 2025-05-03 00:56:21.626569 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] ******************* 2025-05-03 00:56:21.626589 | orchestrator | Saturday 03 May 2025 00:52:33 +0000 (0:00:00.236) 0:09:07.075 ********** 2025-05-03 00:56:21.626596 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-03 00:56:21.626602 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-03 00:56:21.626608 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-03 00:56:21.626614 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.626622 | orchestrator | 2025-05-03 00:56:21.626628 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] ********* 2025-05-03 00:56:21.626634 | orchestrator | Saturday 03 May 2025 00:52:33 +0000 (0:00:00.428) 0:09:07.504 ********** 2025-05-03 00:56:21.626640 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.626646 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.626652 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.626658 | orchestrator | 2025-05-03 00:56:21.626664 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg autoscale on pools] *************** 2025-05-03 00:56:21.626669 | orchestrator | Saturday 03 May 2025 00:52:34 +0000 (0:00:00.582) 0:09:08.086 ********** 2025-05-03 00:56:21.626675 | orchestrator | 
skipping: [testbed-node-3] 2025-05-03 00:56:21.626681 | orchestrator | 2025-05-03 00:56:21.626687 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] **************************** 2025-05-03 00:56:21.626693 | orchestrator | Saturday 03 May 2025 00:52:34 +0000 (0:00:00.246) 0:09:08.333 ********** 2025-05-03 00:56:21.626698 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.626704 | orchestrator | 2025-05-03 00:56:21.626710 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-03 00:56:21.626716 | orchestrator | Saturday 03 May 2025 00:52:34 +0000 (0:00:00.235) 0:09:08.568 ********** 2025-05-03 00:56:21.626722 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:56:21.626728 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:56:21.626733 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:56:21.626739 | orchestrator | 2025-05-03 00:56:21.626745 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-05-03 00:56:21.626751 | orchestrator | 2025-05-03 00:56:21.626757 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-03 00:56:21.626766 | orchestrator | Saturday 03 May 2025 00:52:37 +0000 (0:00:03.000) 0:09:11.568 ********** 2025-05-03 00:56:21.626772 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 00:56:21.626778 | orchestrator | 2025-05-03 00:56:21.626784 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-03 00:56:21.626790 | orchestrator | Saturday 03 May 2025 00:52:39 +0000 (0:00:01.513) 0:09:13.081 ********** 2025-05-03 00:56:21.626796 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.626802 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:21.626808 
| orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.626813 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.626819 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.626825 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.626831 | orchestrator | 2025-05-03 00:56:21.626837 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-03 00:56:21.626843 | orchestrator | Saturday 03 May 2025 00:52:39 +0000 (0:00:00.713) 0:09:13.795 ********** 2025-05-03 00:56:21.626848 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.626854 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.626860 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.626866 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.626872 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.626878 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.626884 | orchestrator | 2025-05-03 00:56:21.626889 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-03 00:56:21.626895 | orchestrator | Saturday 03 May 2025 00:52:41 +0000 (0:00:01.364) 0:09:15.160 ********** 2025-05-03 00:56:21.626901 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.626907 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.626913 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.626919 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.626924 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.626930 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.626936 | orchestrator | 2025-05-03 00:56:21.626942 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-03 00:56:21.626948 | orchestrator | Saturday 03 May 2025 00:52:42 +0000 (0:00:01.307) 0:09:16.467 ********** 2025-05-03 00:56:21.626954 | orchestrator | skipping: [testbed-node-0] 
2025-05-03 00:56:21.626960 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.626966 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.626971 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.626977 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.626983 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.626989 | orchestrator | 2025-05-03 00:56:21.626995 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-03 00:56:21.627003 | orchestrator | Saturday 03 May 2025 00:52:43 +0000 (0:00:01.023) 0:09:17.491 ********** 2025-05-03 00:56:21.627009 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.627015 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:21.627021 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.627027 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.627033 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.627039 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.627044 | orchestrator | 2025-05-03 00:56:21.627050 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-03 00:56:21.627056 | orchestrator | Saturday 03 May 2025 00:52:44 +0000 (0:00:00.947) 0:09:18.439 ********** 2025-05-03 00:56:21.627062 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.627068 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.627074 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.627079 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.627085 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.627094 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.627100 | orchestrator | 2025-05-03 00:56:21.627106 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-03 00:56:21.627112 | orchestrator | Saturday 03 May 2025 00:52:45 +0000 
(0:00:00.683) 0:09:19.123 ********** 2025-05-03 00:56:21.627118 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.627136 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.627143 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.627149 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.627155 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.627161 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.627167 | orchestrator | 2025-05-03 00:56:21.627176 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-03 00:56:21.627182 | orchestrator | Saturday 03 May 2025 00:52:46 +0000 (0:00:00.892) 0:09:20.015 ********** 2025-05-03 00:56:21.627187 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.627207 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.627213 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.627219 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.627225 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.627230 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.627236 | orchestrator | 2025-05-03 00:56:21.627242 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-03 00:56:21.627248 | orchestrator | Saturday 03 May 2025 00:52:46 +0000 (0:00:00.638) 0:09:20.654 ********** 2025-05-03 00:56:21.627265 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.627271 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.627277 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.627283 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.627289 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.627295 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.627301 | orchestrator | 2025-05-03 00:56:21.627307 | orchestrator | TASK [ceph-handler : 
check for a rbd-target-gw container] ********************** 2025-05-03 00:56:21.627313 | orchestrator | Saturday 03 May 2025 00:52:47 +0000 (0:00:00.882) 0:09:21.537 ********** 2025-05-03 00:56:21.627319 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.627325 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.627331 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.627336 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.627342 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.627348 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.627354 | orchestrator | 2025-05-03 00:56:21.627360 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-03 00:56:21.627366 | orchestrator | Saturday 03 May 2025 00:52:48 +0000 (0:00:00.995) 0:09:22.533 ********** 2025-05-03 00:56:21.627372 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:21.627378 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.627384 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.627389 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.627395 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.627401 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.627407 | orchestrator | 2025-05-03 00:56:21.627413 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-03 00:56:21.627419 | orchestrator | Saturday 03 May 2025 00:52:50 +0000 (0:00:01.536) 0:09:24.070 ********** 2025-05-03 00:56:21.627425 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.627431 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.627437 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.627442 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.627448 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.627454 | orchestrator | skipping: [testbed-node-5] 
2025-05-03 00:56:21.627460 | orchestrator | 2025-05-03 00:56:21.627466 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-03 00:56:21.627475 | orchestrator | Saturday 03 May 2025 00:52:50 +0000 (0:00:00.647) 0:09:24.717 ********** 2025-05-03 00:56:21.627481 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:21.627487 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.627492 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.627498 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.627504 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.627510 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.627516 | orchestrator | 2025-05-03 00:56:21.627522 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-03 00:56:21.627528 | orchestrator | Saturday 03 May 2025 00:52:51 +0000 (0:00:00.719) 0:09:25.437 ********** 2025-05-03 00:56:21.627533 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.627539 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.627545 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.627551 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.627557 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.627563 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.627568 | orchestrator | 2025-05-03 00:56:21.627574 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-03 00:56:21.627580 | orchestrator | Saturday 03 May 2025 00:52:52 +0000 (0:00:00.581) 0:09:26.019 ********** 2025-05-03 00:56:21.627586 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.627591 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.627597 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.627603 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.627609 | orchestrator | ok: 
[testbed-node-4] 2025-05-03 00:56:21.627615 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.627621 | orchestrator | 2025-05-03 00:56:21.627626 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-03 00:56:21.627632 | orchestrator | Saturday 03 May 2025 00:52:52 +0000 (0:00:00.680) 0:09:26.699 ********** 2025-05-03 00:56:21.627638 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.627644 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.627650 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.627656 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.627662 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.627688 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.627695 | orchestrator | 2025-05-03 00:56:21.627701 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-03 00:56:21.627707 | orchestrator | Saturday 03 May 2025 00:52:53 +0000 (0:00:00.554) 0:09:27.254 ********** 2025-05-03 00:56:21.627713 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.627719 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.627725 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.627731 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.627737 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.627742 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.627748 | orchestrator | 2025-05-03 00:56:21.627754 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-03 00:56:21.627775 | orchestrator | Saturday 03 May 2025 00:52:53 +0000 (0:00:00.666) 0:09:27.920 ********** 2025-05-03 00:56:21.627781 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.627788 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.627794 | orchestrator | skipping: [testbed-node-2] 2025-05-03 
00:56:21.627799 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.627805 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.627811 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.627817 | orchestrator | 2025-05-03 00:56:21.627823 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-03 00:56:21.627829 | orchestrator | Saturday 03 May 2025 00:52:54 +0000 (0:00:00.585) 0:09:28.506 ********** 2025-05-03 00:56:21.627834 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:21.627840 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.627846 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.627856 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.627862 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.627867 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.627874 | orchestrator | 2025-05-03 00:56:21.627879 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-03 00:56:21.627885 | orchestrator | Saturday 03 May 2025 00:52:55 +0000 (0:00:00.870) 0:09:29.377 ********** 2025-05-03 00:56:21.627891 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:21.627897 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:21.627903 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:21.627908 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.627914 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.627920 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.627926 | orchestrator | 2025-05-03 00:56:21.627932 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-03 00:56:21.627940 | orchestrator | Saturday 03 May 2025 00:52:56 +0000 (0:00:00.680) 0:09:30.058 ********** 2025-05-03 00:56:21.627946 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.627952 | orchestrator | skipping: [testbed-node-1] 
2025-05-03 00:56:21.627958 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.627964 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.627970 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.627975 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.627981 | orchestrator | 2025-05-03 00:56:21.627987 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-03 00:56:21.627993 | orchestrator | Saturday 03 May 2025 00:52:56 +0000 (0:00:00.909) 0:09:30.968 ********** 2025-05-03 00:56:21.627999 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.628005 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.628011 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.628017 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.628022 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.628028 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.628034 | orchestrator | 2025-05-03 00:56:21.628040 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-03 00:56:21.628046 | orchestrator | Saturday 03 May 2025 00:52:57 +0000 (0:00:00.641) 0:09:31.609 ********** 2025-05-03 00:56:21.628052 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.628058 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.628063 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.628069 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.628075 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.628081 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.628087 | orchestrator | 2025-05-03 00:56:21.628093 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-03 00:56:21.628099 | orchestrator | Saturday 03 May 2025 00:52:58 +0000 (0:00:00.909) 0:09:32.519 ********** 
2025-05-03 00:56:21.628105 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.628111 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.628117 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.628122 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.628128 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.628134 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.628140 | orchestrator | 2025-05-03 00:56:21.628146 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-03 00:56:21.628152 | orchestrator | Saturday 03 May 2025 00:52:59 +0000 (0:00:00.646) 0:09:33.165 ********** 2025-05-03 00:56:21.628157 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.628168 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.628174 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.628180 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.628186 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.628191 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.628212 | orchestrator | 2025-05-03 00:56:21.628219 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-03 00:56:21.628224 | orchestrator | Saturday 03 May 2025 00:53:00 +0000 (0:00:00.947) 0:09:34.113 ********** 2025-05-03 00:56:21.628230 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.628236 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.628242 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.628248 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.628267 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.628274 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.628279 | orchestrator | 2025-05-03 00:56:21.628285 | orchestrator | TASK [ceph-config : set_fact _devices] 
***************************************** 2025-05-03 00:56:21.628291 | orchestrator | Saturday 03 May 2025 00:53:00 +0000 (0:00:00.669) 0:09:34.782 ********** 2025-05-03 00:56:21.628297 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.628303 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.628309 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.628314 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.628320 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.628326 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.628332 | orchestrator | 2025-05-03 00:56:21.628338 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-03 00:56:21.628344 | orchestrator | Saturday 03 May 2025 00:53:01 +0000 (0:00:01.178) 0:09:35.961 ********** 2025-05-03 00:56:21.628350 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.628355 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.628361 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.628367 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.628387 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.628395 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.628401 | orchestrator | 2025-05-03 00:56:21.628407 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-03 00:56:21.628412 | orchestrator | Saturday 03 May 2025 00:53:02 +0000 (0:00:00.658) 0:09:36.620 ********** 2025-05-03 00:56:21.628418 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.628424 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.628430 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.628436 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.628442 | orchestrator | skipping: [testbed-node-4] 2025-05-03 
00:56:21.628448 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.628454 | orchestrator | 2025-05-03 00:56:21.628460 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-03 00:56:21.628466 | orchestrator | Saturday 03 May 2025 00:53:03 +0000 (0:00:00.908) 0:09:37.528 ********** 2025-05-03 00:56:21.628472 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.628477 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.628483 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.628489 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.628495 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.628501 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.628507 | orchestrator | 2025-05-03 00:56:21.628513 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-03 00:56:21.628518 | orchestrator | Saturday 03 May 2025 00:53:04 +0000 (0:00:00.677) 0:09:38.206 ********** 2025-05-03 00:56:21.628524 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.628530 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.628536 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.628542 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.628548 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.628554 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.628560 | orchestrator | 2025-05-03 00:56:21.628565 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-03 00:56:21.628575 | orchestrator | Saturday 03 May 2025 00:53:05 +0000 (0:00:00.944) 0:09:39.151 ********** 2025-05-03 00:56:21.628581 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.628587 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.628593 | orchestrator 
| skipping: [testbed-node-2] 2025-05-03 00:56:21.628599 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.628605 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.628611 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.628617 | orchestrator | 2025-05-03 00:56:21.628623 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-03 00:56:21.628628 | orchestrator | Saturday 03 May 2025 00:53:05 +0000 (0:00:00.717) 0:09:39.868 ********** 2025-05-03 00:56:21.628634 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-03 00:56:21.628640 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-03 00:56:21.628646 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.628652 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-03 00:56:21.628658 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-03 00:56:21.628664 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.628670 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-03 00:56:21.628676 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-03 00:56:21.628682 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.628688 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-03 00:56:21.628694 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-03 00:56:21.628699 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.628708 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-03 00:56:21.628714 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-03 00:56:21.628720 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.628726 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-03 00:56:21.628732 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-03 00:56:21.628738 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.628744 | orchestrator 
| 2025-05-03 00:56:21.628750 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-03 00:56:21.628756 | orchestrator | Saturday 03 May 2025 00:53:06 +0000 (0:00:01.072) 0:09:40.941 ********** 2025-05-03 00:56:21.628761 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-03 00:56:21.628770 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-03 00:56:21.628776 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.628782 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-03 00:56:21.628788 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-03 00:56:21.628793 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.628799 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-03 00:56:21.628805 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-03 00:56:21.628811 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.628817 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-03 00:56:21.628822 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-03 00:56:21.628828 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.628834 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-03 00:56:21.628840 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-03 00:56:21.628845 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.628851 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-03 00:56:21.628857 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-03 00:56:21.628863 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.628869 | orchestrator | 2025-05-03 00:56:21.628874 | orchestrator | TASK [ceph-config : set_fact 
_osd_memory_target] ******************************* 2025-05-03 00:56:21.628884 | orchestrator | Saturday 03 May 2025 00:53:07 +0000 (0:00:00.616) 0:09:41.558 ********** 2025-05-03 00:56:21.628902 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.628908 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.628914 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.628920 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.628926 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.628932 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.628941 | orchestrator | 2025-05-03 00:56:21.628947 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-03 00:56:21.628953 | orchestrator | Saturday 03 May 2025 00:53:08 +0000 (0:00:00.706) 0:09:42.265 ********** 2025-05-03 00:56:21.628958 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.628964 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.628970 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.628976 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.628981 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.628987 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.628993 | orchestrator | 2025-05-03 00:56:21.628999 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-03 00:56:21.629005 | orchestrator | Saturday 03 May 2025 00:53:08 +0000 (0:00:00.558) 0:09:42.823 ********** 2025-05-03 00:56:21.629011 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.629016 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.629022 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.629028 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.629033 | orchestrator | skipping: 
[testbed-node-4] 2025-05-03 00:56:21.629039 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.629045 | orchestrator | 2025-05-03 00:56:21.629051 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-03 00:56:21.629057 | orchestrator | Saturday 03 May 2025 00:53:09 +0000 (0:00:00.759) 0:09:43.583 ********** 2025-05-03 00:56:21.629062 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.629068 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.629074 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.629080 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.629086 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.629091 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.629097 | orchestrator | 2025-05-03 00:56:21.629103 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-03 00:56:21.629109 | orchestrator | Saturday 03 May 2025 00:53:10 +0000 (0:00:00.580) 0:09:44.163 ********** 2025-05-03 00:56:21.629115 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.629120 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.629126 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.629132 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.629138 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.629143 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.629149 | orchestrator | 2025-05-03 00:56:21.629157 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-03 00:56:21.629163 | orchestrator | Saturday 03 May 2025 00:53:10 +0000 (0:00:00.777) 0:09:44.941 ********** 2025-05-03 00:56:21.629169 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.629175 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.629181 | orchestrator | skipping: 
[testbed-node-2] 2025-05-03 00:56:21.629186 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.629192 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.629198 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.629203 | orchestrator | 2025-05-03 00:56:21.629209 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-03 00:56:21.629215 | orchestrator | Saturday 03 May 2025 00:53:11 +0000 (0:00:00.654) 0:09:45.596 ********** 2025-05-03 00:56:21.629224 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-03 00:56:21.629230 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-03 00:56:21.629236 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-03 00:56:21.629242 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.629248 | orchestrator | 2025-05-03 00:56:21.629282 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-03 00:56:21.629289 | orchestrator | Saturday 03 May 2025 00:53:11 +0000 (0:00:00.371) 0:09:45.967 ********** 2025-05-03 00:56:21.629295 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-03 00:56:21.629301 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-03 00:56:21.629307 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-03 00:56:21.629313 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.629319 | orchestrator | 2025-05-03 00:56:21.629325 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-03 00:56:21.629331 | orchestrator | Saturday 03 May 2025 00:53:12 +0000 (0:00:00.631) 0:09:46.599 ********** 2025-05-03 00:56:21.629337 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-03 00:56:21.629343 | orchestrator | skipping: [testbed-node-0] => 
(item=testbed-node-4)  2025-05-03 00:56:21.629348 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-03 00:56:21.629354 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.629360 | orchestrator | 2025-05-03 00:56:21.629366 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-03 00:56:21.629372 | orchestrator | Saturday 03 May 2025 00:53:13 +0000 (0:00:00.717) 0:09:47.316 ********** 2025-05-03 00:56:21.629378 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.629384 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.629390 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.629396 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.629404 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.629410 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.629416 | orchestrator | 2025-05-03 00:56:21.629422 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-03 00:56:21.629428 | orchestrator | Saturday 03 May 2025 00:53:13 +0000 (0:00:00.602) 0:09:47.918 ********** 2025-05-03 00:56:21.629433 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-03 00:56:21.629439 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.629445 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-03 00:56:21.629467 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.629474 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-03 00:56:21.629480 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-03 00:56:21.629486 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.629492 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.629497 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-03 00:56:21.629503 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.629509 | 
orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-03 00:56:21.629515 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.629521 | orchestrator | 2025-05-03 00:56:21.629527 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-03 00:56:21.629533 | orchestrator | Saturday 03 May 2025 00:53:15 +0000 (0:00:01.503) 0:09:49.421 ********** 2025-05-03 00:56:21.629543 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.629552 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.629562 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.629573 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.629583 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.629593 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.629602 | orchestrator | 2025-05-03 00:56:21.629612 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-03 00:56:21.629627 | orchestrator | Saturday 03 May 2025 00:53:16 +0000 (0:00:00.741) 0:09:50.162 ********** 2025-05-03 00:56:21.629636 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.629644 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.629654 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.629663 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.629673 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.629682 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.629692 | orchestrator | 2025-05-03 00:56:21.629702 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-03 00:56:21.629713 | orchestrator | Saturday 03 May 2025 00:53:17 +0000 (0:00:00.954) 0:09:51.116 ********** 2025-05-03 00:56:21.629723 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-03 00:56:21.629729 | orchestrator | skipping: [testbed-node-0] 
2025-05-03 00:56:21.629735 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-03 00:56:21.629740 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.629746 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-03 00:56:21.629752 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.629758 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-03 00:56:21.629764 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.629770 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-03 00:56:21.629775 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.629781 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-03 00:56:21.629787 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.629793 | orchestrator | 2025-05-03 00:56:21.629799 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-03 00:56:21.629805 | orchestrator | Saturday 03 May 2025 00:53:18 +0000 (0:00:00.988) 0:09:52.105 ********** 2025-05-03 00:56:21.629811 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.629817 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.629823 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.629829 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-03 00:56:21.629835 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.629842 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-03 00:56:21.629848 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.629854 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-03 00:56:21.629860 | orchestrator | skipping: 
[testbed-node-5] 2025-05-03 00:56:21.629866 | orchestrator | 2025-05-03 00:56:21.629873 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-03 00:56:21.629879 | orchestrator | Saturday 03 May 2025 00:53:19 +0000 (0:00:01.218) 0:09:53.324 ********** 2025-05-03 00:56:21.629885 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-03 00:56:21.629891 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-03 00:56:21.629897 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-03 00:56:21.629904 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-03 00:56:21.629910 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-03 00:56:21.629916 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-03 00:56:21.629922 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.629928 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-03 00:56:21.629934 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-03 00:56:21.629940 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-03 00:56:21.629946 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.629953 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-03 00:56:21.629963 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-03 00:56:21.629969 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-03 00:56:21.629975 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.629982 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-03 00:56:21.629988 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-03 00:56:21.629994 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-03 00:56:21.630000 | 
orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.630006 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.630029 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-03 00:56:21.630062 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-03 00:56:21.630070 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-03 00:56:21.630077 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.630083 | orchestrator | 2025-05-03 00:56:21.630089 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-03 00:56:21.630095 | orchestrator | Saturday 03 May 2025 00:53:20 +0000 (0:00:01.477) 0:09:54.802 ********** 2025-05-03 00:56:21.630102 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.630108 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.630114 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.630120 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.630126 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.630133 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.630139 | orchestrator | 2025-05-03 00:56:21.630145 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-03 00:56:21.630151 | orchestrator | Saturday 03 May 2025 00:53:21 +0000 (0:00:01.058) 0:09:55.861 ********** 2025-05-03 00:56:21.630157 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.630163 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.630169 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.630176 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-03 00:56:21.630182 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.630188 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-03 00:56:21.630194 | orchestrator | skipping: 
[testbed-node-4] 2025-05-03 00:56:21.630200 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-03 00:56:21.630206 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.630212 | orchestrator | 2025-05-03 00:56:21.630219 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-03 00:56:21.630225 | orchestrator | Saturday 03 May 2025 00:53:22 +0000 (0:00:01.058) 0:09:56.919 ********** 2025-05-03 00:56:21.630231 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.630237 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.630247 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.630265 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.630271 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.630277 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.630284 | orchestrator | 2025-05-03 00:56:21.630290 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-03 00:56:21.630397 | orchestrator | Saturday 03 May 2025 00:53:23 +0000 (0:00:00.982) 0:09:57.901 ********** 2025-05-03 00:56:21.630404 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:21.630410 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:21.630416 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:21.630422 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.630428 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.630434 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.630440 | orchestrator | 2025-05-03 00:56:21.630447 | orchestrator | TASK [ceph-crash : create client.crash keyring] ******************************** 2025-05-03 00:56:21.630453 | orchestrator | Saturday 03 May 2025 00:53:24 +0000 (0:00:00.997) 0:09:58.899 ********** 2025-05-03 00:56:21.630465 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:21.630471 | orchestrator | 
2025-05-03 00:56:21.630483 | orchestrator | TASK [ceph-crash : get keys from monitors] *************************************
2025-05-03 00:56:21.630489 | orchestrator | Saturday 03 May 2025 00:53:28 +0000 (0:00:03.247) 0:10:02.147 **********
2025-05-03 00:56:21.630495 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.630502 | orchestrator |
2025-05-03 00:56:21.630508 | orchestrator | TASK [ceph-crash : copy ceph key(s) if needed] *********************************
2025-05-03 00:56:21.630514 | orchestrator | Saturday 03 May 2025 00:53:29 +0000 (0:00:01.792) 0:10:03.939 **********
2025-05-03 00:56:21.630520 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.630526 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:56:21.630533 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:56:21.630539 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:56:21.630545 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:56:21.630551 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:56:21.630557 | orchestrator |
2025-05-03 00:56:21.630563 | orchestrator | TASK [ceph-crash : create /var/lib/ceph/crash/posted] **************************
2025-05-03 00:56:21.630569 | orchestrator | Saturday 03 May 2025 00:53:31 +0000 (0:00:01.584) 0:10:05.523 **********
2025-05-03 00:56:21.630576 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:56:21.630582 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:56:21.630588 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:56:21.630594 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:56:21.630600 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:56:21.630606 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:56:21.630612 | orchestrator |
2025-05-03 00:56:21.630619 | orchestrator | TASK [ceph-crash : include_tasks systemd.yml] **********************************
2025-05-03 00:56:21.630625 | orchestrator | Saturday 03 May 2025 00:53:32 +0000 (0:00:00.998) 0:10:06.521 **********
2025-05-03 00:56:21.630631 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-03 00:56:21.630639 | orchestrator |
2025-05-03 00:56:21.630645 | orchestrator | TASK [ceph-crash : generate systemd unit file for ceph-crash container] ********
2025-05-03 00:56:21.630651 | orchestrator | Saturday 03 May 2025 00:53:33 +0000 (0:00:01.105) 0:10:07.627 **********
2025-05-03 00:56:21.630657 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:56:21.630664 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:56:21.630670 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:56:21.630676 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:56:21.630682 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:56:21.630688 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:56:21.630694 | orchestrator |
2025-05-03 00:56:21.630700 | orchestrator | TASK [ceph-crash : start the ceph-crash service] *******************************
2025-05-03 00:56:21.630707 | orchestrator | Saturday 03 May 2025 00:53:35 +0000 (0:00:01.604) 0:10:09.231 **********
2025-05-03 00:56:21.630713 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:56:21.630719 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:56:21.630725 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:56:21.630731 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:56:21.630738 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:56:21.630747 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:56:21.630753 | orchestrator |
2025-05-03 00:56:21.630760 | orchestrator | RUNNING HANDLER [ceph-handler : ceph crash handler] ****************************
2025-05-03 00:56:21.630766 | orchestrator | Saturday 03 May 2025 00:53:39 +0000 (0:00:04.297) 0:10:13.528 **********
2025-05-03 00:56:21.630772 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-03 00:56:21.630779 | orchestrator |
2025-05-03 00:56:21.630785 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ******
2025-05-03 00:56:21.630795 | orchestrator | Saturday 03 May 2025 00:53:40 +0000 (0:00:01.345) 0:10:14.873 **********
2025-05-03 00:56:21.630801 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.630807 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.630813 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.630820 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.630826 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.630832 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.630838 | orchestrator |
2025-05-03 00:56:21.630844 | orchestrator | RUNNING HANDLER [ceph-handler : restart the ceph-crash service] ****************
2025-05-03 00:56:21.630850 | orchestrator | Saturday 03 May 2025 00:53:41 +0000 (0:00:00.694) 0:10:15.568 **********
2025-05-03 00:56:21.630856 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:56:21.630863 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:56:21.630869 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:56:21.630875 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:56:21.630881 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:56:21.630887 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:56:21.630893 | orchestrator |
2025-05-03 00:56:21.630899 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] *******
2025-05-03 00:56:21.630906 | orchestrator | Saturday 03 May 2025 00:53:44 +0000 (0:00:02.660) 0:10:18.228 **********
2025-05-03 00:56:21.630912 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:21.630918 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:21.630927 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:21.630933 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.630939 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.630945 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.630951 | orchestrator |
2025-05-03 00:56:21.630958 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2025-05-03 00:56:21.630964 | orchestrator |
2025-05-03 00:56:21.630970 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-05-03 00:56:21.630976 | orchestrator | Saturday 03 May 2025 00:53:47 +0000 (0:00:02.859) 0:10:21.087 **********
2025-05-03 00:56:21.630982 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-03 00:56:21.630992 | orchestrator |
2025-05-03 00:56:21.630998 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-05-03 00:56:21.631004 | orchestrator | Saturday 03 May 2025 00:53:47 +0000 (0:00:00.754) 0:10:21.841 **********
2025-05-03 00:56:21.631010 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.631016 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.631023 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.631029 | orchestrator |
2025-05-03 00:56:21.631035 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-05-03 00:56:21.631041 | orchestrator | Saturday 03 May 2025 00:53:48 +0000 (0:00:00.326) 0:10:22.168 **********
2025-05-03 00:56:21.631047 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.631054 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.631060 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.631066 | orchestrator |
2025-05-03 00:56:21.631072 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-05-03 00:56:21.631078 | orchestrator | Saturday 03 May 2025 00:53:48 +0000 (0:00:00.761) 0:10:22.929 **********
2025-05-03 00:56:21.631084 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.631090 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.631096 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.631103 | orchestrator |
2025-05-03 00:56:21.631109 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-05-03 00:56:21.631115 | orchestrator | Saturday 03 May 2025 00:53:50 +0000 (0:00:01.093) 0:10:24.022 **********
2025-05-03 00:56:21.631121 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.631127 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.631133 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.631140 | orchestrator |
2025-05-03 00:56:21.631149 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-05-03 00:56:21.631159 | orchestrator | Saturday 03 May 2025 00:53:50 +0000 (0:00:00.839) 0:10:24.862 **********
2025-05-03 00:56:21.631165 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.631171 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.631177 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.631183 | orchestrator |
2025-05-03 00:56:21.631190 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-05-03 00:56:21.631196 | orchestrator | Saturday 03 May 2025 00:53:51 +0000 (0:00:00.331) 0:10:25.193 **********
2025-05-03 00:56:21.631202 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.631208 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.631214 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.631220 | orchestrator |
2025-05-03 00:56:21.631226 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-05-03 00:56:21.631233 | orchestrator | Saturday 03 May 2025 00:53:51 +0000 (0:00:00.339) 0:10:25.533 **********
2025-05-03 00:56:21.631239 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.631245 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.631262 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.631268 | orchestrator |
2025-05-03 00:56:21.631275 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-05-03 00:56:21.631281 | orchestrator | Saturday 03 May 2025 00:53:52 +0000 (0:00:00.572) 0:10:26.106 **********
2025-05-03 00:56:21.631287 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.631293 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.631299 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.631306 | orchestrator |
2025-05-03 00:56:21.631312 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-05-03 00:56:21.631323 | orchestrator | Saturday 03 May 2025 00:53:52 +0000 (0:00:00.329) 0:10:26.435 **********
2025-05-03 00:56:21.631329 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.631335 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.631342 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.631348 | orchestrator |
2025-05-03 00:56:21.631354 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-05-03 00:56:21.631360 | orchestrator | Saturday 03 May 2025 00:53:52 +0000 (0:00:00.366) 0:10:26.802 **********
2025-05-03 00:56:21.631366 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.631373 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.631379 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.631385 | orchestrator |
2025-05-03 00:56:21.631391 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-05-03 00:56:21.631397 | orchestrator | Saturday 03 May 2025 00:53:53 +0000 (0:00:00.337) 0:10:27.139 **********
2025-05-03 00:56:21.631404 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.631410 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.631416 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.631422 | orchestrator |
2025-05-03 00:56:21.631428 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-05-03 00:56:21.631452 | orchestrator | Saturday 03 May 2025 00:53:54 +0000 (0:00:00.984) 0:10:28.123 **********
2025-05-03 00:56:21.631459 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.631491 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.631497 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.631504 | orchestrator |
2025-05-03 00:56:21.631510 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-05-03 00:56:21.631516 | orchestrator | Saturday 03 May 2025 00:53:54 +0000 (0:00:00.333) 0:10:28.457 **********
2025-05-03 00:56:21.631522 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.631529 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.631535 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.631541 | orchestrator |
2025-05-03 00:56:21.631547 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-05-03 00:56:21.631558 | orchestrator | Saturday 03 May 2025 00:53:54 +0000 (0:00:00.336) 0:10:28.793 **********
2025-05-03 00:56:21.631564 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.631570 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.631576 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.631583 | orchestrator |
2025-05-03 00:56:21.631589 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-05-03 00:56:21.631595 | orchestrator | Saturday 03 May 2025 00:53:55 +0000 (0:00:00.339) 0:10:29.133 **********
2025-05-03 00:56:21.631601 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.631607 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.631614 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.631620 | orchestrator |
2025-05-03 00:56:21.631626 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-05-03 00:56:21.631632 | orchestrator | Saturday 03 May 2025 00:53:55 +0000 (0:00:00.606) 0:10:29.739 **********
2025-05-03 00:56:21.631638 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.631645 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.631654 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.631660 | orchestrator |
2025-05-03 00:56:21.631666 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-05-03 00:56:21.631673 | orchestrator | Saturday 03 May 2025 00:53:56 +0000 (0:00:00.347) 0:10:30.087 **********
2025-05-03 00:56:21.631679 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.631685 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.631691 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.631697 | orchestrator |
2025-05-03 00:56:21.631703 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-05-03 00:56:21.631710 | orchestrator | Saturday 03 May 2025 00:53:56 +0000 (0:00:00.341) 0:10:30.428 **********
2025-05-03 00:56:21.631716 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.631722 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.631728 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.631734 | orchestrator |
2025-05-03 00:56:21.631741 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-05-03 00:56:21.631747 | orchestrator | Saturday 03 May 2025 00:53:56 +0000 (0:00:00.327) 0:10:30.756 **********
2025-05-03 00:56:21.631753 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.631759 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.631765 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.631771 | orchestrator |
2025-05-03 00:56:21.631778 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-05-03 00:56:21.631784 | orchestrator | Saturday 03 May 2025 00:53:57 +0000 (0:00:00.576) 0:10:31.333 **********
2025-05-03 00:56:21.631790 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.631796 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.631802 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.631808 | orchestrator |
2025-05-03 00:56:21.631817 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-05-03 00:56:21.631824 | orchestrator | Saturday 03 May 2025 00:53:57 +0000 (0:00:00.349) 0:10:31.682 **********
2025-05-03 00:56:21.631830 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.631836 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.631843 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.631849 | orchestrator |
2025-05-03 00:56:21.631855 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-05-03 00:56:21.631861 | orchestrator | Saturday 03 May 2025 00:53:58 +0000 (0:00:00.349) 0:10:32.032 **********
2025-05-03 00:56:21.631867 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.631873 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.631880 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.631886 | orchestrator |
2025-05-03 00:56:21.631892 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-05-03 00:56:21.631898 | orchestrator | Saturday 03 May 2025 00:53:58 +0000 (0:00:00.327) 0:10:32.360 **********
2025-05-03 00:56:21.631908 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.631914 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.631920 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.631926 | orchestrator |
2025-05-03 00:56:21.631932 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-05-03 00:56:21.631942 | orchestrator | Saturday 03 May 2025 00:53:59 +0000 (0:00:00.675) 0:10:33.036 **********
2025-05-03 00:56:21.631949 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.631955 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.631961 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.631967 | orchestrator |
2025-05-03 00:56:21.631973 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-05-03 00:56:21.631980 | orchestrator | Saturday 03 May 2025 00:53:59 +0000 (0:00:00.348) 0:10:33.384 **********
2025-05-03 00:56:21.631986 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.631992 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.631998 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.632004 | orchestrator |
2025-05-03 00:56:21.632010 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-05-03 00:56:21.632017 | orchestrator | Saturday 03 May 2025 00:53:59 +0000 (0:00:00.327) 0:10:33.711 **********
2025-05-03 00:56:21.632023 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.632029 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.632035 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.632041 | orchestrator |
2025-05-03 00:56:21.632048 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-05-03 00:56:21.632054 | orchestrator | Saturday 03 May 2025 00:54:00 +0000 (0:00:00.314) 0:10:34.026 **********
2025-05-03 00:56:21.632060 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.632066 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.632072 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.632079 | orchestrator |
2025-05-03 00:56:21.632085 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-05-03 00:56:21.632091 | orchestrator | Saturday 03 May 2025 00:54:00 +0000 (0:00:00.602) 0:10:34.628 **********
2025-05-03 00:56:21.632097 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.632104 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.632110 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.632116 | orchestrator |
2025-05-03 00:56:21.632122 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-05-03 00:56:21.632128 | orchestrator | Saturday 03 May 2025 00:54:01 +0000 (0:00:00.376) 0:10:35.004 **********
2025-05-03 00:56:21.632135 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.632141 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.632147 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.632153 | orchestrator |
2025-05-03 00:56:21.632159 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-05-03 00:56:21.632166 | orchestrator | Saturday 03 May 2025 00:54:01 +0000 (0:00:00.337) 0:10:35.342 **********
2025-05-03 00:56:21.632172 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.632178 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.632184 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.632190 | orchestrator |
2025-05-03 00:56:21.632196 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-05-03 00:56:21.632203 | orchestrator | Saturday 03 May 2025 00:54:01 +0000 (0:00:00.352) 0:10:35.694 **********
2025-05-03 00:56:21.632209 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.632215 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.632221 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.632227 | orchestrator |
2025-05-03 00:56:21.632233 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-05-03 00:56:21.632239 | orchestrator | Saturday 03 May 2025 00:54:02 +0000 (0:00:00.589) 0:10:36.284 **********
2025-05-03 00:56:21.632289 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.632296 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.632302 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.632309 | orchestrator |
2025-05-03 00:56:21.632315 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-05-03 00:56:21.632322 | orchestrator | Saturday 03 May 2025 00:54:02 +0000 (0:00:00.354) 0:10:36.638 **********
2025-05-03 00:56:21.632328 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-03 00:56:21.632334 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-03 00:56:21.632340 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.632346 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-03 00:56:21.632353 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-03 00:56:21.632359 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.632368 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-03 00:56:21.632377 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-03 00:56:21.632384 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.632390 | orchestrator |
2025-05-03 00:56:21.632396 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-05-03 00:56:21.632402 | orchestrator | Saturday 03 May 2025 00:54:03 +0000 (0:00:00.386) 0:10:37.026 **********
2025-05-03 00:56:21.632408 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)
2025-05-03 00:56:21.632414 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)
2025-05-03 00:56:21.632420 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.632427 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)
2025-05-03 00:56:21.632433 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)
2025-05-03 00:56:21.632439 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.632445 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)
2025-05-03 00:56:21.632451 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)
2025-05-03 00:56:21.632457 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.632463 | orchestrator |
2025-05-03 00:56:21.632470 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-05-03 00:56:21.632476 | orchestrator | Saturday 03 May 2025 00:54:03 +0000 (0:00:00.355) 0:10:37.381 **********
2025-05-03 00:56:21.632482 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.632488 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.632494 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.632500 | orchestrator |
2025-05-03 00:56:21.632507 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-05-03 00:56:21.632516 | orchestrator | Saturday 03 May 2025 00:54:03 +0000 (0:00:00.597) 0:10:37.979 **********
2025-05-03 00:56:21.632522 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.632529 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.632535 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.632541 | orchestrator |
2025-05-03 00:56:21.632547 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-03 00:56:21.632554 | orchestrator | Saturday 03 May 2025 00:54:04 +0000 (0:00:00.349) 0:10:38.328 **********
2025-05-03 00:56:21.632560 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.632566 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.632572 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.632579 | orchestrator |
2025-05-03 00:56:21.632584 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-03 00:56:21.632590 | orchestrator | Saturday 03 May 2025 00:54:04 +0000 (0:00:00.341) 0:10:38.670 **********
2025-05-03 00:56:21.632596 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.632602 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.632607 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.632617 | orchestrator |
2025-05-03 00:56:21.632623 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-03 00:56:21.632631 | orchestrator | Saturday 03 May 2025 00:54:04 +0000 (0:00:00.323) 0:10:38.994 **********
2025-05-03 00:56:21.632637 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.632643 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.632649 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.632654 | orchestrator |
2025-05-03 00:56:21.632660 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-03 00:56:21.632666 | orchestrator | Saturday 03 May 2025 00:54:05 +0000 (0:00:00.647) 0:10:39.642 **********
2025-05-03 00:56:21.632672 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.632678 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.632683 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.632689 | orchestrator |
2025-05-03 00:56:21.632695 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-03 00:56:21.632701 | orchestrator | Saturday 03 May 2025 00:54:06 +0000 (0:00:00.371) 0:10:40.013 **********
2025-05-03 00:56:21.632707 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-03 00:56:21.632712 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-03 00:56:21.632718 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-03 00:56:21.632724 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.632730 | orchestrator |
2025-05-03 00:56:21.632736 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-03 00:56:21.632741 | orchestrator | Saturday 03 May 2025 00:54:06 +0000 (0:00:00.439) 0:10:40.453 **********
2025-05-03 00:56:21.632747 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-03 00:56:21.632753 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-03 00:56:21.632759 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-03 00:56:21.632765 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.632771 | orchestrator |
2025-05-03 00:56:21.632776 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-03 00:56:21.632782 | orchestrator | Saturday 03 May 2025 00:54:06 +0000 (0:00:00.439) 0:10:40.892 **********
2025-05-03 00:56:21.632788 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-03 00:56:21.632794 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-03 00:56:21.632800 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-03 00:56:21.632805 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.632811 | orchestrator |
2025-05-03 00:56:21.632817 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-03 00:56:21.632823 | orchestrator | Saturday 03 May 2025 00:54:07 +0000 (0:00:00.435) 0:10:41.328 **********
2025-05-03 00:56:21.632829 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.632835 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.632840 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.632846 | orchestrator |
2025-05-03 00:56:21.632852 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-03 00:56:21.632858 | orchestrator | Saturday 03 May 2025 00:54:07 +0000 (0:00:00.331) 0:10:41.660 **********
2025-05-03 00:56:21.632864 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-03 00:56:21.632869 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-03 00:56:21.632875 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.632881 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.632887 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-03 00:56:21.632893 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.632898 | orchestrator |
2025-05-03 00:56:21.632904 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-03 00:56:21.632910 | orchestrator | Saturday 03 May 2025 00:54:08 +0000 (0:00:00.809) 0:10:42.469 **********
2025-05-03 00:56:21.632921 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.632927 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.632933 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.632939 | orchestrator |
2025-05-03 00:56:21.632944 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-03 00:56:21.632950 | orchestrator | Saturday 03 May 2025 00:54:08 +0000 (0:00:00.303) 0:10:42.773 **********
2025-05-03 00:56:21.632956 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.632962 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.632968 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.632973 | orchestrator |
2025-05-03 00:56:21.632979 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-03 00:56:21.632985 | orchestrator | Saturday 03 May 2025 00:54:09 +0000 (0:00:00.295) 0:10:43.069 **********
2025-05-03 00:56:21.632991 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-03 00:56:21.632997 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.633003 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-03 00:56:21.633011 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.633017 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-03 00:56:21.633023 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.633029 | orchestrator |
2025-05-03 00:56:21.633035 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-03 00:56:21.633041 | orchestrator | Saturday 03 May 2025 00:54:09 +0000 (0:00:00.508) 0:10:43.577 **********
2025-05-03 00:56:21.633047 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-03 00:56:21.633053 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.633059 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-03 00:56:21.633065 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.633071 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-03 00:56:21.633076 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.633082 | orchestrator |
2025-05-03 00:56:21.633088 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-03 00:56:21.633094 | orchestrator | Saturday 03 May 2025 00:54:10 +0000 (0:00:00.556) 0:10:44.134 **********
2025-05-03 00:56:21.633100 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-03 00:56:21.633106 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-03 00:56:21.633111 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-03 00:56:21.633117 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-03 00:56:21.633123 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-03 00:56:21.633129 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-03 00:56:21.633134 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.633140 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.633146 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-03 00:56:21.633152 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-03 00:56:21.633158 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-03 00:56:21.633164 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.633169 | orchestrator |
2025-05-03 00:56:21.633175 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-05-03 00:56:21.633181 | orchestrator | Saturday 03 May 2025 00:54:10 +0000 (0:00:00.567) 0:10:44.702 **********
2025-05-03 00:56:21.633187 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.633193 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.633198 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.633204 | orchestrator |
2025-05-03 00:56:21.633210 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-05-03 00:56:21.633219 | orchestrator | Saturday 03 May 2025 00:54:11 +0000 (0:00:00.671) 0:10:45.373 **********
2025-05-03 00:56:21.633225 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-03 00:56:21.633231 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.633236 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-03 00:56:21.633242 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.633248 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-03 00:56:21.633266 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.633272 | orchestrator |
2025-05-03 00:56:21.633278 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-05-03 00:56:21.633284 | orchestrator | Saturday 03 May 2025 00:54:11 +0000 (0:00:00.531) 0:10:45.904 **********
2025-05-03 00:56:21.633290 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.633296 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.633302 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.633308 | orchestrator |
2025-05-03 00:56:21.633314 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-05-03 00:56:21.633320 | orchestrator | Saturday 03 May 2025 00:54:12 +0000 (0:00:00.674) 0:10:46.578 **********
2025-05-03 00:56:21.633325 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.633331 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.633337 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.633343 | orchestrator |
2025-05-03 00:56:21.633349 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] ***************************
2025-05-03 00:56:21.633358 | orchestrator | Saturday 03 May 2025 00:54:13 +0000 (0:00:00.497) 0:10:47.075 **********
2025-05-03 00:56:21.633364 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.633370 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.633375 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2025-05-03 00:56:21.633381 | orchestrator |
2025-05-03 00:56:21.633387 | orchestrator | TASK [ceph-facts : get current default crush rule details] *********************
2025-05-03 00:56:21.633393 | orchestrator | Saturday 03 May 2025 00:54:13 +0000 (0:00:00.406) 0:10:47.482 **********
2025-05-03 00:56:21.633399 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-03 00:56:21.633404 | orchestrator |
2025-05-03 00:56:21.633410 | orchestrator | TASK [ceph-facts : get current default crush rule name] ************************
2025-05-03 00:56:21.633416 | orchestrator | Saturday 03 May 2025 00:54:15 +0000 (0:00:01.939) 0:10:49.421 **********
2025-05-03 00:56:21.633423 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2025-05-03 00:56:21.633431 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.633437 | orchestrator |
2025-05-03 00:56:21.633443 | orchestrator | TASK [ceph-mds : create filesystem pools] **************************************
2025-05-03 00:56:21.633448 | orchestrator | Saturday 03 May 2025 00:54:15 +0000 (0:00:00.397) 0:10:49.819 **********
2025-05-03 00:56:21.633458 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-03 00:56:21.633465 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-03 00:56:21.633471 | orchestrator |
2025-05-03 00:56:21.633477 | orchestrator | TASK [ceph-mds : create ceph filesystem] ***************************************
2025-05-03 00:56:21.633483 | orchestrator | Saturday 03 May 2025 00:54:22 +0000 (0:00:06.362) 0:10:56.182 **********
2025-05-03 00:56:21.633492 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-03 00:56:21.633498 | orchestrator |
2025-05-03 00:56:21.633504 | orchestrator | TASK [ceph-mds : include common.yml] *******************************************
2025-05-03 00:56:21.633510 | orchestrator | Saturday 03 May 2025 00:54:25 +0000 (0:00:03.004) 0:10:59.186 **********
2025-05-03 00:56:21.633516 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-03 00:56:21.633522 | orchestrator |
2025-05-03 00:56:21.633528 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] *********************
2025-05-03 00:56:21.633533 | orchestrator | Saturday 03 May 2025 00:54:26 +0000 (0:00:00.903) 0:11:00.090 **********
2025-05-03 00:56:21.633539 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2025-05-03 00:56:21.633545 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2025-05-03 00:56:21.633551 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2025-05-03 00:56:21.633557 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2025-05-03 00:56:21.633563 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2025-05-03 00:56:21.633568 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2025-05-03 00:56:21.633574 | orchestrator |
2025-05-03 00:56:21.633580 | orchestrator | TASK
[ceph-mds : get keys from monitors] *************************************** 2025-05-03 00:56:21.633586 | orchestrator | Saturday 03 May 2025 00:54:27 +0000 (0:00:01.093) 0:11:01.183 ********** 2025-05-03 00:56:21.633592 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-03 00:56:21.633597 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-03 00:56:21.633603 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-03 00:56:21.633609 | orchestrator | 2025-05-03 00:56:21.633615 | orchestrator | TASK [ceph-mds : copy ceph key(s) if needed] *********************************** 2025-05-03 00:56:21.633620 | orchestrator | Saturday 03 May 2025 00:54:29 +0000 (0:00:01.857) 0:11:03.041 ********** 2025-05-03 00:56:21.633626 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-03 00:56:21.633632 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-03 00:56:21.633638 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:56:21.633644 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-03 00:56:21.633650 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-03 00:56:21.633656 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:56:21.633661 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-03 00:56:21.633667 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-03 00:56:21.633673 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:56:21.633679 | orchestrator | 2025-05-03 00:56:21.633685 | orchestrator | TASK [ceph-mds : non_containerized.yml] **************************************** 2025-05-03 00:56:21.633690 | orchestrator | Saturday 03 May 2025 00:54:30 +0000 (0:00:01.161) 0:11:04.202 ********** 2025-05-03 00:56:21.633696 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.633702 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.633708 | orchestrator | skipping: 
[testbed-node-5] 2025-05-03 00:56:21.633713 | orchestrator | 2025-05-03 00:56:21.633719 | orchestrator | TASK [ceph-mds : containerized.yml] ******************************************** 2025-05-03 00:56:21.633725 | orchestrator | Saturday 03 May 2025 00:54:30 +0000 (0:00:00.422) 0:11:04.625 ********** 2025-05-03 00:56:21.633731 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 00:56:21.633737 | orchestrator | 2025-05-03 00:56:21.633743 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************ 2025-05-03 00:56:21.633749 | orchestrator | Saturday 03 May 2025 00:54:31 +0000 (0:00:00.550) 0:11:05.175 ********** 2025-05-03 00:56:21.633755 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 00:56:21.633763 | orchestrator | 2025-05-03 00:56:21.633769 | orchestrator | TASK [ceph-mds : generate systemd unit file] *********************************** 2025-05-03 00:56:21.633775 | orchestrator | Saturday 03 May 2025 00:54:31 +0000 (0:00:00.768) 0:11:05.943 ********** 2025-05-03 00:56:21.633781 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:56:21.633787 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:56:21.633792 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:56:21.633798 | orchestrator | 2025-05-03 00:56:21.633804 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target file] ************************ 2025-05-03 00:56:21.633810 | orchestrator | Saturday 03 May 2025 00:54:33 +0000 (0:00:01.223) 0:11:07.166 ********** 2025-05-03 00:56:21.633816 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:56:21.633822 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:56:21.633827 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:56:21.633833 | orchestrator | 2025-05-03 00:56:21.633844 | orchestrator | TASK [ceph-mds : enable 
ceph-mds.target] *************************************** 2025-05-03 00:56:21.633851 | orchestrator | Saturday 03 May 2025 00:54:34 +0000 (0:00:01.158) 0:11:08.325 ********** 2025-05-03 00:56:21.633856 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:56:21.633862 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:56:21.633868 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:56:21.633874 | orchestrator | 2025-05-03 00:56:21.633880 | orchestrator | TASK [ceph-mds : systemd start mds container] ********************************** 2025-05-03 00:56:21.633886 | orchestrator | Saturday 03 May 2025 00:54:37 +0000 (0:00:02.722) 0:11:11.048 ********** 2025-05-03 00:56:21.633891 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:56:21.633897 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:56:21.633903 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:56:21.633909 | orchestrator | 2025-05-03 00:56:21.633915 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] ********************************* 2025-05-03 00:56:21.633921 | orchestrator | Saturday 03 May 2025 00:54:38 +0000 (0:00:01.905) 0:11:12.954 ********** 2025-05-03 00:56:21.633926 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left). 2025-05-03 00:56:21.633932 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left). 2025-05-03 00:56:21.633938 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left). 
2025-05-03 00:56:21.633944 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.633950 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.633956 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.633962 | orchestrator | 2025-05-03 00:56:21.633967 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-03 00:56:21.633973 | orchestrator | Saturday 03 May 2025 00:54:56 +0000 (0:00:17.046) 0:11:30.000 ********** 2025-05-03 00:56:21.633979 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:56:21.633985 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:56:21.633991 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:56:21.633996 | orchestrator | 2025-05-03 00:56:21.634002 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-03 00:56:21.634008 | orchestrator | Saturday 03 May 2025 00:54:56 +0000 (0:00:00.709) 0:11:30.710 ********** 2025-05-03 00:56:21.634031 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 00:56:21.634039 | orchestrator | 2025-05-03 00:56:21.634045 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ******** 2025-05-03 00:56:21.634050 | orchestrator | Saturday 03 May 2025 00:54:57 +0000 (0:00:00.772) 0:11:31.482 ********** 2025-05-03 00:56:21.634056 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.634062 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.634068 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.634074 | orchestrator | 2025-05-03 00:56:21.634080 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-05-03 00:56:21.634085 | orchestrator | Saturday 03 May 2025 00:54:57 +0000 (0:00:00.333) 0:11:31.815 ********** 2025-05-03 00:56:21.634096 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:56:21.634102 | 
orchestrator | changed: [testbed-node-4] 2025-05-03 00:56:21.634108 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:56:21.634114 | orchestrator | 2025-05-03 00:56:21.634119 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ******************** 2025-05-03 00:56:21.634125 | orchestrator | Saturday 03 May 2025 00:54:58 +0000 (0:00:01.141) 0:11:32.957 ********** 2025-05-03 00:56:21.634131 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-03 00:56:21.634137 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-03 00:56:21.634142 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-03 00:56:21.634148 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.634154 | orchestrator | 2025-05-03 00:56:21.634160 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-05-03 00:56:21.634166 | orchestrator | Saturday 03 May 2025 00:54:59 +0000 (0:00:00.860) 0:11:33.818 ********** 2025-05-03 00:56:21.634172 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.634178 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.634183 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.634189 | orchestrator | 2025-05-03 00:56:21.634195 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-03 00:56:21.634201 | orchestrator | Saturday 03 May 2025 00:55:00 +0000 (0:00:00.597) 0:11:34.416 ********** 2025-05-03 00:56:21.634207 | orchestrator | changed: [testbed-node-3] 2025-05-03 00:56:21.634212 | orchestrator | changed: [testbed-node-4] 2025-05-03 00:56:21.634218 | orchestrator | changed: [testbed-node-5] 2025-05-03 00:56:21.634224 | orchestrator | 2025-05-03 00:56:21.634230 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-05-03 00:56:21.634236 | orchestrator | 2025-05-03 00:56:21.634242 | orchestrator | TASK 
[ceph-handler : include check_running_containers.yml] ********************* 2025-05-03 00:56:21.634247 | orchestrator | Saturday 03 May 2025 00:55:02 +0000 (0:00:02.059) 0:11:36.475 ********** 2025-05-03 00:56:21.634278 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 00:56:21.634287 | orchestrator | 2025-05-03 00:56:21.634293 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-03 00:56:21.634299 | orchestrator | Saturday 03 May 2025 00:55:03 +0000 (0:00:00.715) 0:11:37.190 ********** 2025-05-03 00:56:21.634305 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.634310 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.634316 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.634322 | orchestrator | 2025-05-03 00:56:21.634328 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-03 00:56:21.634334 | orchestrator | Saturday 03 May 2025 00:55:03 +0000 (0:00:00.329) 0:11:37.520 ********** 2025-05-03 00:56:21.634340 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.634348 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.634354 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.634360 | orchestrator | 2025-05-03 00:56:21.634366 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-03 00:56:21.634375 | orchestrator | Saturday 03 May 2025 00:55:04 +0000 (0:00:00.706) 0:11:38.227 ********** 2025-05-03 00:56:21.634381 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.634387 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.634393 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.634399 | orchestrator | 2025-05-03 00:56:21.634408 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 
2025-05-03 00:56:21.634414 | orchestrator | Saturday 03 May 2025 00:55:05 +0000 (0:00:01.065) 0:11:39.293 ********** 2025-05-03 00:56:21.634420 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.634425 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.634431 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.634437 | orchestrator | 2025-05-03 00:56:21.634443 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-03 00:56:21.634453 | orchestrator | Saturday 03 May 2025 00:55:06 +0000 (0:00:00.740) 0:11:40.033 ********** 2025-05-03 00:56:21.634458 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.634464 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.634470 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.634476 | orchestrator | 2025-05-03 00:56:21.634482 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-03 00:56:21.634488 | orchestrator | Saturday 03 May 2025 00:55:06 +0000 (0:00:00.342) 0:11:40.376 ********** 2025-05-03 00:56:21.634493 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.634499 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.634505 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.634511 | orchestrator | 2025-05-03 00:56:21.634517 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-03 00:56:21.634523 | orchestrator | Saturday 03 May 2025 00:55:06 +0000 (0:00:00.350) 0:11:40.727 ********** 2025-05-03 00:56:21.634528 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.634534 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.634540 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.634546 | orchestrator | 2025-05-03 00:56:21.634552 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-03 
00:56:21.634558 | orchestrator | Saturday 03 May 2025 00:55:07 +0000 (0:00:00.689) 0:11:41.416 ********** 2025-05-03 00:56:21.634563 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.634569 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.634575 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.634581 | orchestrator | 2025-05-03 00:56:21.634587 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-03 00:56:21.634593 | orchestrator | Saturday 03 May 2025 00:55:07 +0000 (0:00:00.352) 0:11:41.769 ********** 2025-05-03 00:56:21.634598 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.634604 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.634610 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.634616 | orchestrator | 2025-05-03 00:56:21.634622 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-03 00:56:21.634628 | orchestrator | Saturday 03 May 2025 00:55:08 +0000 (0:00:00.322) 0:11:42.091 ********** 2025-05-03 00:56:21.634634 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.634639 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.634645 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.634651 | orchestrator | 2025-05-03 00:56:21.634657 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-03 00:56:21.634663 | orchestrator | Saturday 03 May 2025 00:55:08 +0000 (0:00:00.347) 0:11:42.439 ********** 2025-05-03 00:56:21.634668 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.634674 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.634679 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.634685 | orchestrator | 2025-05-03 00:56:21.634690 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-03 00:56:21.634695 | 
orchestrator | Saturday 03 May 2025 00:55:09 +0000 (0:00:01.033) 0:11:43.472 ********** 2025-05-03 00:56:21.634700 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.634706 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.634711 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.634716 | orchestrator | 2025-05-03 00:56:21.634721 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-03 00:56:21.634727 | orchestrator | Saturday 03 May 2025 00:55:09 +0000 (0:00:00.317) 0:11:43.790 ********** 2025-05-03 00:56:21.634732 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.634737 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.634742 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.634748 | orchestrator | 2025-05-03 00:56:21.634753 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-03 00:56:21.634762 | orchestrator | Saturday 03 May 2025 00:55:10 +0000 (0:00:00.336) 0:11:44.126 ********** 2025-05-03 00:56:21.634767 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.634773 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.634778 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.634783 | orchestrator | 2025-05-03 00:56:21.634788 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-03 00:56:21.634794 | orchestrator | Saturday 03 May 2025 00:55:10 +0000 (0:00:00.348) 0:11:44.474 ********** 2025-05-03 00:56:21.634799 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.634804 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.634809 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.634814 | orchestrator | 2025-05-03 00:56:21.634820 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-03 00:56:21.634825 | orchestrator | Saturday 03 May 2025 
00:55:11 +0000 (0:00:00.609) 0:11:45.083 ********** 2025-05-03 00:56:21.634830 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.634835 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.634841 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.634846 | orchestrator | 2025-05-03 00:56:21.634851 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-03 00:56:21.634856 | orchestrator | Saturday 03 May 2025 00:55:11 +0000 (0:00:00.330) 0:11:45.414 ********** 2025-05-03 00:56:21.634861 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.634867 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.634872 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.634878 | orchestrator | 2025-05-03 00:56:21.634883 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-03 00:56:21.634888 | orchestrator | Saturday 03 May 2025 00:55:11 +0000 (0:00:00.330) 0:11:45.745 ********** 2025-05-03 00:56:21.634896 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.634902 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.634907 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.634912 | orchestrator | 2025-05-03 00:56:21.634918 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-03 00:56:21.634923 | orchestrator | Saturday 03 May 2025 00:55:12 +0000 (0:00:00.303) 0:11:46.049 ********** 2025-05-03 00:56:21.634928 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.634936 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.634941 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.634947 | orchestrator | 2025-05-03 00:56:21.634955 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-03 00:56:21.634960 | orchestrator | Saturday 03 May 2025 00:55:12 +0000 
(0:00:00.599) 0:11:46.649 ********** 2025-05-03 00:56:21.634965 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:56:21.634971 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:56:21.634976 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:56:21.634981 | orchestrator | 2025-05-03 00:56:21.634986 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-03 00:56:21.634991 | orchestrator | Saturday 03 May 2025 00:55:13 +0000 (0:00:00.377) 0:11:47.026 ********** 2025-05-03 00:56:21.634997 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.635002 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.635008 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.635013 | orchestrator | 2025-05-03 00:56:21.635018 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-03 00:56:21.635023 | orchestrator | Saturday 03 May 2025 00:55:13 +0000 (0:00:00.352) 0:11:47.378 ********** 2025-05-03 00:56:21.635028 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.635034 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.635039 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.635044 | orchestrator | 2025-05-03 00:56:21.635054 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-03 00:56:21.635059 | orchestrator | Saturday 03 May 2025 00:55:13 +0000 (0:00:00.329) 0:11:47.707 ********** 2025-05-03 00:56:21.635069 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.635074 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.635080 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.635085 | orchestrator | 2025-05-03 00:56:21.635090 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-03 00:56:21.635096 | orchestrator | Saturday 03 May 2025 00:55:14 +0000 (0:00:00.592) 
0:11:48.300 ********** 2025-05-03 00:56:21.635101 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.635106 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.635111 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.635117 | orchestrator | 2025-05-03 00:56:21.635122 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-03 00:56:21.635127 | orchestrator | Saturday 03 May 2025 00:55:14 +0000 (0:00:00.340) 0:11:48.641 ********** 2025-05-03 00:56:21.635132 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.635138 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.635143 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.635148 | orchestrator | 2025-05-03 00:56:21.635153 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-03 00:56:21.635159 | orchestrator | Saturday 03 May 2025 00:55:14 +0000 (0:00:00.342) 0:11:48.983 ********** 2025-05-03 00:56:21.635164 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.635169 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.635175 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.635180 | orchestrator | 2025-05-03 00:56:21.635185 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-03 00:56:21.635191 | orchestrator | Saturday 03 May 2025 00:55:15 +0000 (0:00:00.350) 0:11:49.333 ********** 2025-05-03 00:56:21.635196 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.635201 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.635206 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.635212 | orchestrator | 2025-05-03 00:56:21.635217 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-03 00:56:21.635223 | orchestrator | Saturday 03 May 2025 00:55:15 
+0000 (0:00:00.630) 0:11:49.964 ********** 2025-05-03 00:56:21.635228 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.635233 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.635238 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.635244 | orchestrator | 2025-05-03 00:56:21.635249 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-03 00:56:21.635264 | orchestrator | Saturday 03 May 2025 00:55:16 +0000 (0:00:00.358) 0:11:50.323 ********** 2025-05-03 00:56:21.635270 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.635275 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.635281 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.635286 | orchestrator | 2025-05-03 00:56:21.635291 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-03 00:56:21.635296 | orchestrator | Saturday 03 May 2025 00:55:16 +0000 (0:00:00.336) 0:11:50.660 ********** 2025-05-03 00:56:21.635302 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.635307 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.635312 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.635318 | orchestrator | 2025-05-03 00:56:21.635323 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-03 00:56:21.635328 | orchestrator | Saturday 03 May 2025 00:55:17 +0000 (0:00:00.349) 0:11:51.009 ********** 2025-05-03 00:56:21.635333 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.635339 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.635344 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.635349 | orchestrator | 2025-05-03 00:56:21.635355 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 
2025-05-03 00:56:21.635360 | orchestrator | Saturday 03 May 2025 00:55:17 +0000 (0:00:00.642) 0:11:51.652 ********** 2025-05-03 00:56:21.635369 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.635374 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.635379 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.635385 | orchestrator | 2025-05-03 00:56:21.635393 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-03 00:56:21.635398 | orchestrator | Saturday 03 May 2025 00:55:17 +0000 (0:00:00.339) 0:11:51.991 ********** 2025-05-03 00:56:21.635404 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-03 00:56:21.635409 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-03 00:56:21.635414 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.635420 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-03 00:56:21.635425 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-03 00:56:21.635430 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.635436 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-03 00:56:21.635441 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-03 00:56:21.635446 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.635451 | orchestrator | 2025-05-03 00:56:21.635457 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-03 00:56:21.635462 | orchestrator | Saturday 03 May 2025 00:55:18 +0000 (0:00:00.399) 0:11:52.391 ********** 2025-05-03 00:56:21.635467 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-03 00:56:21.635475 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-03 00:56:21.635480 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.635485 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  
2025-05-03 00:56:21.635491 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-03 00:56:21.635496 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.635501 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-03 00:56:21.635506 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-03 00:56:21.635511 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.635517 | orchestrator | 2025-05-03 00:56:21.635522 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-03 00:56:21.635527 | orchestrator | Saturday 03 May 2025 00:55:18 +0000 (0:00:00.377) 0:11:52.769 ********** 2025-05-03 00:56:21.635532 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.635537 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.635543 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.635548 | orchestrator | 2025-05-03 00:56:21.635553 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-03 00:56:21.635558 | orchestrator | Saturday 03 May 2025 00:55:19 +0000 (0:00:00.618) 0:11:53.387 ********** 2025-05-03 00:56:21.635563 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.635571 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.635576 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:56:21.635582 | orchestrator | 2025-05-03 00:56:21.635589 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-03 00:56:21.635595 | orchestrator | Saturday 03 May 2025 00:55:19 +0000 (0:00:00.344) 0:11:53.732 ********** 2025-05-03 00:56:21.635600 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:56:21.635605 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:56:21.635610 | orchestrator | skipping: [testbed-node-5] 
2025-05-03 00:56:21.635615 | orchestrator |
2025-05-03 00:56:21.635621 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-03 00:56:21.635626 | orchestrator | Saturday 03 May 2025 00:55:20 +0000 (0:00:00.379) 0:11:54.112 **********
2025-05-03 00:56:21.635631 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.635636 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.635642 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.635651 | orchestrator |
2025-05-03 00:56:21.635656 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-03 00:56:21.635661 | orchestrator | Saturday 03 May 2025 00:55:20 +0000 (0:00:00.390) 0:11:54.502 **********
2025-05-03 00:56:21.635667 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.635672 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.635677 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.635682 | orchestrator |
2025-05-03 00:56:21.635687 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-03 00:56:21.635693 | orchestrator | Saturday 03 May 2025 00:55:21 +0000 (0:00:00.628) 0:11:55.131 **********
2025-05-03 00:56:21.635698 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.635703 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.635708 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.635713 | orchestrator |
2025-05-03 00:56:21.635719 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-03 00:56:21.635724 | orchestrator | Saturday 03 May 2025 00:55:21 +0000 (0:00:00.330) 0:11:55.462 **********
2025-05-03 00:56:21.635729 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-03 00:56:21.635734 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-03 00:56:21.635740 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-03 00:56:21.635745 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.635750 | orchestrator |
2025-05-03 00:56:21.635755 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-03 00:56:21.635760 | orchestrator | Saturday 03 May 2025 00:55:21 +0000 (0:00:00.495) 0:11:55.957 **********
2025-05-03 00:56:21.635766 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-03 00:56:21.635771 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-03 00:56:21.635776 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-03 00:56:21.635781 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.635787 | orchestrator |
2025-05-03 00:56:21.635792 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-03 00:56:21.635797 | orchestrator | Saturday 03 May 2025 00:55:22 +0000 (0:00:00.484) 0:11:56.441 **********
2025-05-03 00:56:21.635802 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-03 00:56:21.635808 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-03 00:56:21.635813 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-03 00:56:21.635821 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.635826 | orchestrator |
2025-05-03 00:56:21.635832 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-03 00:56:21.635837 | orchestrator | Saturday 03 May 2025 00:55:22 +0000 (0:00:00.477) 0:11:56.918 **********
2025-05-03 00:56:21.635842 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.635847 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.635852 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.635858 | orchestrator |
2025-05-03 00:56:21.635863 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-03 00:56:21.635868 | orchestrator | Saturday 03 May 2025 00:55:23 +0000 (0:00:00.335) 0:11:57.254 **********
2025-05-03 00:56:21.635873 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-03 00:56:21.635879 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.635884 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-03 00:56:21.635889 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.635894 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-03 00:56:21.635900 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.635905 | orchestrator |
2025-05-03 00:56:21.635910 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-03 00:56:21.635915 | orchestrator | Saturday 03 May 2025 00:55:23 +0000 (0:00:00.740) 0:11:57.994 **********
2025-05-03 00:56:21.635923 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.635928 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.635934 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.635939 | orchestrator |
2025-05-03 00:56:21.635944 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-03 00:56:21.635949 | orchestrator | Saturday 03 May 2025 00:55:24 +0000 (0:00:00.355) 0:11:58.349 **********
2025-05-03 00:56:21.635955 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.635960 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.635965 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.635970 | orchestrator |
2025-05-03 00:56:21.635975 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-03 00:56:21.635981 | orchestrator | Saturday 03 May 2025 00:55:24 +0000 (0:00:00.379) 0:11:58.729 **********
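Editor's note: the ceph-facts tasks in this part of the log compute per-host rgw_instances facts; the items that appear further down ({'instance_name': 'rgw0', 'radosgw_address': ..., 'radosgw_frontend_port': 8081}) follow a simple pattern: one dict per RGW instance, named rgw0..rgwN-1, with the frontend port counted up from a base port. A minimal Python sketch of that construction (the helper name, base port default, and port-offset rule are assumptions for illustration, not ceph-ansible code):

```python
# Hypothetical reconstruction of the rgw_instances fact seen in the log:
# one dict per RGW instance on a host, instance names rgw0..rgwN-1,
# frontend ports counted up from a base port.
def build_rgw_instances(radosgw_address, base_port=8081, num_instances=1):
    return [
        {
            "instance_name": f"rgw{i}",
            "radosgw_address": radosgw_address,
            "radosgw_frontend_port": base_port + i,
        }
        for i in range(num_instances)
    ]

# testbed-node-3 as it appears in the log:
print(build_rgw_instances("192.168.16.13"))
```

With num_instances=1 this reproduces exactly the single-item list logged for each testbed node.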
2025-05-03 00:56:21.635986 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-03 00:56:21.635991 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.635997 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-03 00:56:21.636002 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.636007 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-03 00:56:21.636013 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.636018 | orchestrator |
2025-05-03 00:56:21.636023 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-03 00:56:21.636028 | orchestrator | Saturday 03 May 2025 00:55:25 +0000 (0:00:00.507) 0:11:59.236 **********
2025-05-03 00:56:21.636034 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-03 00:56:21.636042 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.636047 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-03 00:56:21.636052 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.636058 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-03 00:56:21.636063 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.636068 | orchestrator |
2025-05-03 00:56:21.636073 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-03 00:56:21.636079 | orchestrator | Saturday 03 May 2025 00:55:25 +0000 (0:00:00.627) 0:11:59.863 **********
2025-05-03 00:56:21.636084 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-03 00:56:21.636089 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-03 00:56:21.636094 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-03 00:56:21.636100 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.636105 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-03 00:56:21.636110 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-03 00:56:21.636115 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-03 00:56:21.636121 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.636126 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-03 00:56:21.636131 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-03 00:56:21.636136 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-03 00:56:21.636142 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.636147 | orchestrator |
2025-05-03 00:56:21.636152 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-05-03 00:56:21.636157 | orchestrator | Saturday 03 May 2025 00:55:26 +0000 (0:00:00.640) 0:12:00.504 **********
2025-05-03 00:56:21.636162 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.636168 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.636173 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.636178 | orchestrator |
2025-05-03 00:56:21.636184 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-05-03 00:56:21.636192 | orchestrator | Saturday 03 May 2025 00:55:27 +0000 (0:00:00.912) 0:12:01.417 **********
2025-05-03 00:56:21.636197 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-03 00:56:21.636202 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.636208 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-03 00:56:21.636213 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.636218 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-03 00:56:21.636223 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.636229 | orchestrator |
2025-05-03 00:56:21.636234 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-05-03 00:56:21.636242 | orchestrator | Saturday 03 May 2025 00:55:28 +0000 (0:00:00.584) 0:12:02.001 **********
2025-05-03 00:56:21.636248 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.636263 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.636268 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.636274 | orchestrator |
2025-05-03 00:56:21.636282 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-05-03 00:56:21.636287 | orchestrator | Saturday 03 May 2025 00:55:28 +0000 (0:00:00.843) 0:12:02.845 **********
2025-05-03 00:56:21.636292 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.636298 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.636303 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.636308 | orchestrator |
2025-05-03 00:56:21.636313 | orchestrator | TASK [ceph-rgw : include common.yml] *******************************************
2025-05-03 00:56:21.636319 | orchestrator | Saturday 03 May 2025 00:55:29 +0000 (0:00:00.533) 0:12:03.379 **********
2025-05-03 00:56:21.636324 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-03 00:56:21.636329 | orchestrator |
2025-05-03 00:56:21.636335 | orchestrator | TASK [ceph-rgw : create rados gateway directories] *****************************
2025-05-03 00:56:21.636340 | orchestrator | Saturday 03 May 2025 00:55:30 +0000 (0:00:00.783) 0:12:04.163 **********
2025-05-03 00:56:21.636345 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2025-05-03 00:56:21.636350 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2025-05-03 00:56:21.636356 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2025-05-03 00:56:21.636361 | orchestrator |
2025-05-03 00:56:21.636366 | orchestrator | TASK [ceph-rgw : get keys from monitors] ***************************************
2025-05-03 00:56:21.636371 | orchestrator | Saturday 03 May 2025 00:55:30 +0000 (0:00:00.713) 0:12:04.876 **********
2025-05-03 00:56:21.636376 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-03 00:56:21.636382 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-03 00:56:21.636387 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-05-03 00:56:21.636392 | orchestrator |
2025-05-03 00:56:21.636397 | orchestrator | TASK [ceph-rgw : copy ceph key(s) if needed] ***********************************
2025-05-03 00:56:21.636403 | orchestrator | Saturday 03 May 2025 00:55:32 +0000 (0:00:01.767) 0:12:06.644 **********
2025-05-03 00:56:21.636408 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-03 00:56:21.636413 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-03 00:56:21.636418 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:56:21.636424 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-03 00:56:21.636429 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-03 00:56:21.636434 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:56:21.636440 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-03 00:56:21.636445 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-03 00:56:21.636451 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:56:21.636456 | orchestrator |
2025-05-03 00:56:21.636461 | orchestrator | TASK [ceph-rgw : copy SSL certificate & key data to certificate path] **********
2025-05-03 00:56:21.636467 | orchestrator | Saturday 03 May 2025 00:55:33 +0000 (0:00:01.218) 0:12:07.862 **********
2025-05-03 00:56:21.636475 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.636481 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.636486 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.636491 | orchestrator |
2025-05-03 00:56:21.636496 | orchestrator | TASK [ceph-rgw : include_tasks pre_requisite.yml] ******************************
2025-05-03 00:56:21.636502 | orchestrator | Saturday 03 May 2025 00:55:34 +0000 (0:00:00.586) 0:12:08.448 **********
2025-05-03 00:56:21.636507 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.636512 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.636517 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.636523 | orchestrator |
2025-05-03 00:56:21.636528 | orchestrator | TASK [ceph-rgw : rgw pool creation tasks] **************************************
2025-05-03 00:56:21.636533 | orchestrator | Saturday 03 May 2025 00:55:34 +0000 (0:00:00.335) 0:12:08.784 **********
2025-05-03 00:56:21.636538 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2025-05-03 00:56:21.636544 | orchestrator |
2025-05-03 00:56:21.636549 | orchestrator | TASK [ceph-rgw : create ec profile] ********************************************
2025-05-03 00:56:21.636554 | orchestrator | Saturday 03 May 2025 00:55:35 +0000 (0:00:00.233) 0:12:09.017 **********
2025-05-03 00:56:21.636559 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-03 00:56:21.636568 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-03 00:56:21.636573 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-03 00:56:21.636578 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-03 00:56:21.636584 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-03 00:56:21.636589 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.636594 | orchestrator |
2025-05-03 00:56:21.636600 | orchestrator | TASK [ceph-rgw : set crush rule] ***********************************************
2025-05-03 00:56:21.636605 | orchestrator | Saturday 03 May 2025 00:55:36 +0000 (0:00:00.991) 0:12:10.008 **********
2025-05-03 00:56:21.636610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-03 00:56:21.636618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-03 00:56:21.636624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-03 00:56:21.636629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-03 00:56:21.636636 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-03 00:56:21.636642 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.636647 | orchestrator |
2025-05-03 00:56:21.636652 | orchestrator | TASK [ceph-rgw : create ec pools for rgw] **************************************
2025-05-03 00:56:21.636658 | orchestrator | Saturday 03 May 2025 00:55:37 +0000 (0:00:01.005) 0:12:11.013 **********
2025-05-03 00:56:21.636663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-03 00:56:21.636668 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-03 00:56:21.636674 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-03 00:56:21.636692 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-03 00:56:21.636697 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-03 00:56:21.636703 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.636708 | orchestrator |
2025-05-03 00:56:21.636713 | orchestrator | TASK [ceph-rgw : create replicated pools for rgw] ******************************
2025-05-03 00:56:21.636719 | orchestrator | Saturday 03 May 2025 00:55:37 +0000 (0:00:00.682) 0:12:11.696 **********
2025-05-03 00:56:21.636724 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-03 00:56:21.636730 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-03 00:56:21.636735 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-03 00:56:21.636740 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-03 00:56:21.636746 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-03 00:56:21.636751 | orchestrator |
2025-05-03 00:56:21.636756 | orchestrator | TASK [ceph-rgw : include_tasks openstack-keystone.yml] *************************
2025-05-03 00:56:21.636761 | orchestrator | Saturday 03 May 2025 00:56:03 +0000 (0:00:25.785) 0:12:37.482 **********
2025-05-03 00:56:21.636767 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.636772 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.636777 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.636783 | orchestrator |
2025-05-03 00:56:21.636788 | orchestrator | TASK [ceph-rgw : include_tasks start_radosgw.yml] ******************************
2025-05-03 00:56:21.636793 | orchestrator | Saturday 03 May 2025 00:56:03 +0000 (0:00:00.477) 0:12:37.960 **********
2025-05-03 00:56:21.636798 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.636803 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.636809 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.636814 | orchestrator |
2025-05-03 00:56:21.636819 | orchestrator | TASK [ceph-rgw : include start_docker_rgw.yml] *********************************
2025-05-03 00:56:21.636824 | orchestrator | Saturday 03 May 2025 00:56:04 +0000 (0:00:00.327) 0:12:38.287 **********
2025-05-03 00:56:21.636830 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-03 00:56:21.636835 | orchestrator |
2025-05-03 00:56:21.636842 | orchestrator | TASK [ceph-rgw : include_task systemd.yml] *************************************
2025-05-03 00:56:21.636848 | orchestrator | Saturday 03 May 2025 00:56:04 +0000 (0:00:00.552) 0:12:38.839 **********
2025-05-03 00:56:21.636853 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-03 00:56:21.636858 | orchestrator |
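Editor's note: the "create replicated pools for rgw" task above (the 25.8 s item in the recap below) creates each default.rgw.* pool with pg_num=8 and size=3, delegated to a monitor node. Outside of Ansible, the corresponding ceph CLI invocations can be sketched like this; the command-builder helper is a hand-written approximation for illustration, not the ceph-ansible implementation:

```python
# Sketch: build the `ceph` CLI commands corresponding to the
# "create replicated pools for rgw" items in the log:
# `ceph osd pool create <name> <pg_num> <type>` followed by
# `ceph osd pool set <name> size <replicas>`.
RGW_POOLS = {
    "default.rgw.buckets.data": {"pg_num": 8, "size": 3, "type": "replicated"},
    "default.rgw.buckets.index": {"pg_num": 8, "size": 3, "type": "replicated"},
    "default.rgw.control": {"pg_num": 8, "size": 3, "type": "replicated"},
    "default.rgw.log": {"pg_num": 8, "size": 3, "type": "replicated"},
    "default.rgw.meta": {"pg_num": 8, "size": 3, "type": "replicated"},
}

def pool_commands(pools):
    """Two commands per pool: create it, then set its replica count."""
    cmds = []
    for name, opts in pools.items():
        cmds.append(f"ceph osd pool create {name} {opts['pg_num']} {opts['type']}")
        cmds.append(f"ceph osd pool set {name} size {opts['size']}")
    return cmds

for cmd in pool_commands(RGW_POOLS):
    print(cmd)
```

The long elapsed time on this task (25.79 s per the recap) is expected: each pool create waits for the monitors to commit the new pool before returning.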
2025-05-03 00:56:21.636863 | orchestrator | TASK [ceph-rgw : generate systemd unit file] ***********************************
2025-05-03 00:56:21.636869 | orchestrator | Saturday 03 May 2025 00:56:05 +0000 (0:00:00.777) 0:12:39.617 **********
2025-05-03 00:56:21.636874 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:56:21.636879 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:56:21.636884 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:56:21.636890 | orchestrator |
2025-05-03 00:56:21.636895 | orchestrator | TASK [ceph-rgw : generate systemd ceph-radosgw target file] ********************
2025-05-03 00:56:21.636900 | orchestrator | Saturday 03 May 2025 00:56:06 +0000 (0:00:01.180) 0:12:40.797 **********
2025-05-03 00:56:21.636910 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:56:21.636915 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:56:21.636920 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:56:21.636926 | orchestrator |
2025-05-03 00:56:21.636933 | orchestrator | TASK [ceph-rgw : enable ceph-radosgw.target] ***********************************
2025-05-03 00:56:21.636939 | orchestrator | Saturday 03 May 2025 00:56:07 +0000 (0:00:01.161) 0:12:41.959 **********
2025-05-03 00:56:21.636944 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:56:21.636949 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:56:21.636954 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:56:21.636960 | orchestrator |
2025-05-03 00:56:21.636965 | orchestrator | TASK [ceph-rgw : systemd start rgw container] **********************************
2025-05-03 00:56:21.636970 | orchestrator | Saturday 03 May 2025 00:56:09 +0000 (0:00:01.967) 0:12:43.926 **********
2025-05-03 00:56:21.636975 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-03 00:56:21.636981 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-03 00:56:21.636986 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-03 00:56:21.636991 | orchestrator |
2025-05-03 00:56:21.636997 | orchestrator | TASK [ceph-rgw : include_tasks multisite/main.yml] *****************************
2025-05-03 00:56:21.637002 | orchestrator | Saturday 03 May 2025 00:56:11 +0000 (0:00:01.887) 0:12:45.813 **********
2025-05-03 00:56:21.637007 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.637012 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:56:21.637018 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:56:21.637023 | orchestrator |
2025-05-03 00:56:21.637028 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-05-03 00:56:21.637034 | orchestrator | Saturday 03 May 2025 00:56:13 +0000 (0:00:01.259) 0:12:47.073 **********
2025-05-03 00:56:21.637039 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:56:21.637044 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:56:21.637049 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:56:21.637054 | orchestrator |
2025-05-03 00:56:21.637060 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] **********************************
2025-05-03 00:56:21.637065 | orchestrator | Saturday 03 May 2025 00:56:13 +0000 (0:00:00.695) 0:12:47.768 **********
2025-05-03 00:56:21.637070 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-03 00:56:21.637076 | orchestrator |
2025-05-03 00:56:21.637081 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ********
2025-05-03 00:56:21.637086 | orchestrator | Saturday 03 May 2025 00:56:14 +0000 (0:00:00.903) 0:12:48.673 **********
2025-05-03 00:56:21.637091 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.637097 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.637102 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.637107 | orchestrator |
2025-05-03 00:56:21.637112 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] ***********************
2025-05-03 00:56:21.637118 | orchestrator | Saturday 03 May 2025 00:56:15 +0000 (0:00:00.386) 0:12:49.059 **********
2025-05-03 00:56:21.637123 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:56:21.637128 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:56:21.637133 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:56:21.637139 | orchestrator |
2025-05-03 00:56:21.637144 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ********************
2025-05-03 00:56:21.637149 | orchestrator | Saturday 03 May 2025 00:56:16 +0000 (0:00:01.281) 0:12:50.341 **********
2025-05-03 00:56:21.637154 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-03 00:56:21.637160 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-03 00:56:21.637165 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-03 00:56:21.637173 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:56:21.637178 | orchestrator |
2025-05-03 00:56:21.637184 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] *********
2025-05-03 00:56:21.637189 | orchestrator | Saturday 03 May 2025 00:56:17 +0000 (0:00:00.925) 0:12:51.267 **********
2025-05-03 00:56:21.637194 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:56:21.637200 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:56:21.637205 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:56:21.637210 | orchestrator |
2025-05-03 00:56:21.637215 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-05-03 00:56:21.637220 | orchestrator | Saturday 03 May 2025 00:56:17 +0000 (0:00:00.348) 0:12:51.615 **********
2025-05-03 00:56:21.637226 | orchestrator | changed: [testbed-node-3]
2025-05-03 00:56:21.637231 | orchestrator | changed: [testbed-node-4]
2025-05-03 00:56:21.637236 | orchestrator | changed: [testbed-node-5]
2025-05-03 00:56:21.637241 | orchestrator |
2025-05-03 00:56:21.637247 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 00:56:21.637273 | orchestrator | testbed-node-0 : ok=131  changed=38  unreachable=0 failed=0 skipped=291  rescued=0 ignored=0
2025-05-03 00:56:21.637280 | orchestrator | testbed-node-1 : ok=119  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0
2025-05-03 00:56:21.637285 | orchestrator | testbed-node-2 : ok=126  changed=36  unreachable=0 failed=0 skipped=261  rescued=0 ignored=0
2025-05-03 00:56:21.637291 | orchestrator | testbed-node-3 : ok=175  changed=47  unreachable=0 failed=0 skipped=347  rescued=0 ignored=0
2025-05-03 00:56:21.637296 | orchestrator | testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=309  rescued=0 ignored=0
2025-05-03 00:56:21.637304 | orchestrator | testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=307  rescued=0 ignored=0
2025-05-03 00:56:24.649482 | orchestrator |
2025-05-03 00:56:24.649599 | orchestrator |
2025-05-03 00:56:24.649618 | orchestrator |
2025-05-03 00:56:24.649633 | orchestrator | TASKS RECAP ********************************************************************
2025-05-03 00:56:24.649649 | orchestrator | Saturday 03 May 2025 00:56:18 +0000 (0:00:01.280) 0:12:52.895 **********
2025-05-03 00:56:24.649663 | orchestrator | ===============================================================================
2025-05-03 00:56:24.649677 | orchestrator | ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image -- 45.31s
2025-05-03 00:56:24.649814 | orchestrator | ceph-osd : use ceph-volume to create bluestore osds -------------------- 42.01s
2025-05-03 00:56:24.649857 | orchestrator | ceph-rgw : create replicated pools for rgw ----------------------------- 25.79s
2025-05-03 00:56:24.649872 | orchestrator | ceph-mon : waiting for the monitor(s) to form the quorum... ------------ 21.46s
2025-05-03 00:56:24.649886 | orchestrator | ceph-mds : wait for mds socket to exist -------------------------------- 17.05s
2025-05-03 00:56:24.649900 | orchestrator | ceph-mgr : wait for all mgr to be up ----------------------------------- 13.55s
2025-05-03 00:56:24.649914 | orchestrator | ceph-osd : wait for all osd to be up ----------------------------------- 12.45s
2025-05-03 00:56:24.649928 | orchestrator | ceph-mon : fetch ceph initial keys -------------------------------------- 7.77s
2025-05-03 00:56:24.649942 | orchestrator | ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 7.63s
2025-05-03 00:56:24.649955 | orchestrator | ceph-mgr : disable ceph mgr enabled modules ----------------------------- 7.17s
2025-05-03 00:56:24.649969 | orchestrator | ceph-mds : create filesystem pools -------------------------------------- 6.36s
2025-05-03 00:56:24.649983 | orchestrator | ceph-config : create ceph initial directories --------------------------- 6.04s
2025-05-03 00:56:24.649997 | orchestrator | ceph-mgr : add modules to ceph-mgr -------------------------------------- 4.87s
2025-05-03 00:56:24.650087 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 4.68s
2025-05-03 00:56:24.650104 | orchestrator | ceph-crash : start the ceph-crash service ------------------------------- 4.30s
2025-05-03 00:56:24.650118 | orchestrator | ceph-handler : remove tempdir for scripts ------------------------------- 3.79s
2025-05-03 00:56:24.650132 | orchestrator | ceph-config : generate ceph.conf configuration file --------------------- 3.70s
2025-05-03 00:56:24.650146 | orchestrator | ceph-osd : systemd start osd -------------------------------------------- 3.31s
2025-05-03 00:56:24.650160 | orchestrator | ceph-crash : create client.crash keyring -------------------------------- 3.25s
2025-05-03 00:56:24.650174 | orchestrator | ceph-facts : find a running mon container ------------------------------- 3.12s
2025-05-03 00:56:24.650187 | orchestrator | 2025-05-03 00:56:21 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:56:24.650202 | orchestrator | 2025-05-03 00:56:21 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED
2025-05-03 00:56:24.650217 | orchestrator | 2025-05-03 00:56:21 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:56:24.650272 | orchestrator | 2025-05-03 00:56:24 | INFO  | Task d2b616f1-9e66-420b-ad37-3fab543d8765 is in state STARTED
2025-05-03 00:56:24.652016 | orchestrator | 2025-05-03 00:56:24 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:56:24.652049 | orchestrator | 2025-05-03 00:56:24 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED
2025-05-03 00:56:27.708620 | orchestrator | 2025-05-03 00:56:24 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:56:27.708764 | orchestrator | 2025-05-03 00:56:27 | INFO  | Task d2b616f1-9e66-420b-ad37-3fab543d8765 is in state STARTED
2025-05-03 00:56:27.709690 | orchestrator | 2025-05-03 00:56:27 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:56:27.710923 | orchestrator | 2025-05-03 00:56:27 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state STARTED
2025-05-03 00:56:27.711222 | orchestrator | 2025-05-03 00:56:27 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:56:30.772740 | orchestrator | 2025-05-03 00:56:30 | INFO  | Task d2b616f1-9e66-420b-ad37-3fab543d8765 is in state STARTED
2025-05-03 00:56:30.773001 | orchestrator | 2025-05-03 00:56:30 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
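Editor's note: the interleaved INFO lines come from a simple polling loop in the orchestrator: it re-reads the state of each pending task ID, waits one second, and repeats until every task has left the STARTED state. A rough Python sketch of that loop (the function name and the injectable `get_state`/`sleep` hooks are assumptions for illustration, not the OSISM implementation):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, sleep=time.sleep):
    """Poll task states until none is still STARTED, logging like the job output."""
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in sorted(pending):
            states[task_id] = get_state(task_id)
            print(f"INFO  | Task {task_id} is in state {states[task_id]}")
        # Keep polling only the tasks that have not finished yet.
        pending = {t for t in pending if states[t] == "STARTED"}
        if pending:
            print(f"INFO  | Wait {int(interval)} second(s) until the next check")
            sleep(interval)
    return states

# Simulated run: the task reports STARTED twice, then SUCCESS on the third poll.
answers = iter(["STARTED", "STARTED", "SUCCESS"])
result = wait_for_tasks(["46ec6864"], lambda _tid: next(answers), sleep=lambda _s: None)
print(result)
```

Injecting `sleep` makes the loop testable without real delays; in production the default one-second `time.sleep` matches the "Wait 1 second(s)" messages above.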
2025-05-03 00:56:30.774886 | orchestrator | 2025-05-03 00:56:30 | INFO  | Task 46ec6864-7226-4456-835f-60f71dff2c45 is in state SUCCESS
2025-05-03 00:56:30.776543 | orchestrator |
2025-05-03 00:56:30.776652 | orchestrator |
2025-05-03 00:56:30.776672 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2025-05-03 00:56:30.776689 | orchestrator |
2025-05-03 00:56:30.776703 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-05-03 00:56:30.776718 | orchestrator | Saturday 03 May 2025 00:53:07 +0000 (0:00:00.102) 0:00:00.102 **********
2025-05-03 00:56:30.776732 | orchestrator | ok: [localhost] => {
2025-05-03 00:56:30.776748 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2025-05-03 00:56:30.776762 | orchestrator | }
2025-05-03 00:56:30.776777 | orchestrator |
2025-05-03 00:56:30.776791 | orchestrator | TASK [Check MariaDB service] ***************************************************
2025-05-03 00:56:30.776805 | orchestrator | Saturday 03 May 2025 00:53:07 +0000 (0:00:00.031) 0:00:00.134 **********
2025-05-03 00:56:30.776819 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2025-05-03 00:56:30.776834 | orchestrator | ...ignoring
2025-05-03 00:56:30.776848 | orchestrator |
2025-05-03 00:56:30.776862 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2025-05-03 00:56:30.776876 | orchestrator | Saturday 03 May 2025 00:53:09 +0000 (0:00:02.383) 0:00:02.518 **********
2025-05-03 00:56:30.776916 | orchestrator | skipping: [localhost]
2025-05-03 00:56:30.776930 | orchestrator |
2025-05-03 00:56:30.776944 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2025-05-03 00:56:30.776958 | orchestrator | Saturday 03 May 2025 00:53:09 +0000 (0:00:00.039) 0:00:02.557 **********
2025-05-03 00:56:30.776972 | orchestrator | ok: [localhost]
2025-05-03 00:56:30.776986 | orchestrator |
2025-05-03 00:56:30.777000 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-03 00:56:30.777013 | orchestrator |
2025-05-03 00:56:30.777027 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-03 00:56:30.777042 | orchestrator | Saturday 03 May 2025 00:53:09 +0000 (0:00:00.115) 0:00:02.672 **********
2025-05-03 00:56:30.777056 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:30.777072 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:30.777089 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:30.777105 | orchestrator |
2025-05-03 00:56:30.777122 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-03 00:56:30.777138 | orchestrator | Saturday 03 May 2025 00:53:09 +0000 (0:00:00.331) 0:00:03.004 **********
2025-05-03 00:56:30.777154 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-05-03 00:56:30.777190 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
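The ignored `Timeout when waiting for search string MariaDB in …:3306` failure is a `wait_for`-style probe: connect to the database port and look for the string `MariaDB` in the server greeting, since a MariaDB server announces itself in its handshake packet. Before the first deployment nothing listens on 3306, so the timeout is expected, which is what the preceding "This is fine." message warns about. A rough Python sketch of such a probe (an assumption about the check's intent, not kolla-ansible's exact implementation):

```python
import socket


def mariadb_port_alive(host, port=3306, search=b"MariaDB", timeout=10.0):
    """Connect to host:port and look for a marker in the server greeting.

    In the MySQL/MariaDB wire protocol the server speaks first, so reading
    the first bytes after connecting is a cheap liveness signal. Returns
    True only if the connection succeeds and the marker appears.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            banner = sock.recv(1024)  # server sends its handshake unprompted
            return search in banner
    except OSError:  # refused, unreachable, or timed out: not alive (yet)
        return False
```

When the probe fails on every host, the play falls through to treating the cluster as not yet existing and later includes `bootstrap_cluster.yml`, as the rest of this log shows.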
2025-05-03 00:56:30.777207 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-03 00:56:30.777223 | orchestrator | 2025-05-03 00:56:30.777239 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-03 00:56:30.777280 | orchestrator | 2025-05-03 00:56:30.777296 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-03 00:56:30.777313 | orchestrator | Saturday 03 May 2025 00:53:10 +0000 (0:00:00.392) 0:00:03.396 ********** 2025-05-03 00:56:30.777329 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-03 00:56:30.777345 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-03 00:56:30.777360 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-03 00:56:30.777375 | orchestrator | 2025-05-03 00:56:30.777391 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-03 00:56:30.777408 | orchestrator | Saturday 03 May 2025 00:53:10 +0000 (0:00:00.465) 0:00:03.861 ********** 2025-05-03 00:56:30.777423 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:56:30.777439 | orchestrator | 2025-05-03 00:56:30.777452 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-05-03 00:56:30.777466 | orchestrator | Saturday 03 May 2025 00:53:11 +0000 (0:00:00.648) 0:00:04.510 ********** 2025-05-03 00:56:30.777501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-03 00:56:30.777531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-03 00:56:30.777555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-03 00:56:30.777582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-03 00:56:30.777599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-03 00:56:30.777614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-03 00:56:30.777628 | orchestrator | 2025-05-03 00:56:30.777643 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-05-03 00:56:30.777657 | orchestrator | Saturday 03 May 2025 00:53:14 +0000 (0:00:03.410) 0:00:07.920 ********** 2025-05-03 00:56:30.777671 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:30.777692 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:30.777706 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:30.777720 | orchestrator | 2025-05-03 00:56:30.777734 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-05-03 00:56:30.777749 | orchestrator | Saturday 03 May 2025 00:53:15 +0000 (0:00:00.951) 0:00:08.872 ********** 2025-05-03 00:56:30.777762 | orchestrator | 
skipping: [testbed-node-1] 2025-05-03 00:56:30.777776 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:30.777790 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:30.777804 | orchestrator | 2025-05-03 00:56:30.777817 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-05-03 00:56:30.777831 | orchestrator | Saturday 03 May 2025 00:53:17 +0000 (0:00:01.579) 0:00:10.451 ********** 2025-05-03 00:56:30.777853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-03 00:56:30.777877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-03 00:56:30.777894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-03 00:56:30.777923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-03 00:56:30.777940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-03 00:56:30.777955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-03 00:56:30.777970 | orchestrator | 2025-05-03 00:56:30.777984 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-05-03 00:56:30.777998 | orchestrator | Saturday 03 May 2025 00:53:23 +0000 (0:00:05.729) 0:00:16.180 ********** 2025-05-03 00:56:30.778012 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:30.778089 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:30.778103 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:30.778117 | orchestrator | 2025-05-03 00:56:30.778131 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-05-03 00:56:30.778145 | orchestrator | Saturday 03 May 2025 00:53:24 +0000 (0:00:01.114) 0:00:17.295 ********** 2025-05-03 00:56:30.778159 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:56:30.778173 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:56:30.778187 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:30.778200 | orchestrator | 2025-05-03 00:56:30.778214 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-05-03 00:56:30.778228 | orchestrator | Saturday 03 May 2025 00:53:30 +0000 (0:00:06.018) 0:00:23.313 ********** 2025-05-03 00:56:30.778277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 
'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-03 00:56:30.778305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-03 00:56:30.778322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 
'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-03 00:56:30.778351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-03 00:56:30.778368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-03 00:56:30.778383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-03 00:56:30.778398 | orchestrator | 2025-05-03 00:56:30.778412 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-05-03 00:56:30.778426 | orchestrator | Saturday 03 May 2025 00:53:33 +0000 (0:00:03.595) 0:00:26.909 ********** 2025-05-03 00:56:30.778440 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:30.778454 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:56:30.778468 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:56:30.778482 | orchestrator | 2025-05-03 00:56:30.778497 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-05-03 00:56:30.778511 | orchestrator | Saturday 03 May 2025 00:53:34 +0000 (0:00:01.024) 0:00:27.933 ********** 2025-05-03 00:56:30.778525 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:30.778539 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:30.778560 | orchestrator | ok: [testbed-node-2] 2025-05-03 
00:56:30.778574 | orchestrator |
2025-05-03 00:56:30.778588 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2025-05-03 00:56:30.778602 | orchestrator | Saturday 03 May 2025 00:53:35 +0000 (0:00:00.374) 0:00:28.308 **********
2025-05-03 00:56:30.778615 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:30.778629 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:30.778643 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:30.778657 | orchestrator |
2025-05-03 00:56:30.778671 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2025-05-03 00:56:30.778685 | orchestrator | Saturday 03 May 2025 00:53:35 +0000 (0:00:00.534) 0:00:28.842 **********
2025-05-03 00:56:30.778700 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2025-05-03 00:56:30.778714 | orchestrator | ...ignoring
2025-05-03 00:56:30.778728 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2025-05-03 00:56:30.778743 | orchestrator | ...ignoring
2025-05-03 00:56:30.778757 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2025-05-03 00:56:30.778771 | orchestrator | ...ignoring
2025-05-03 00:56:30.778785 | orchestrator |
2025-05-03 00:56:30.778799 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2025-05-03 00:56:30.778813 | orchestrator | Saturday 03 May 2025 00:53:46 +0000 (0:00:10.896) 0:00:39.738 **********
2025-05-03 00:56:30.778826 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:30.778840 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:30.778854 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:30.778868 | orchestrator |
2025-05-03 00:56:30.778882 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2025-05-03 00:56:30.778895 | orchestrator | Saturday 03 May 2025 00:53:47 +0000 (0:00:00.754) 0:00:40.415 **********
2025-05-03 00:56:30.778910 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:30.778924 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:30.778938 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:30.778952 | orchestrator |
2025-05-03 00:56:30.778971 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2025-05-03 00:56:30.778986 | orchestrator | Saturday 03 May 2025 00:53:48 +0000 (0:00:00.536) 0:00:41.170 **********
2025-05-03 00:56:30.779000 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:30.779014 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:30.779028 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:30.779042 | orchestrator |
2025-05-03 00:56:30.779062 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2025-05-03 00:56:30.779076 | orchestrator | Saturday 03 May 2025 00:53:48 +0000 (0:00:00.742) 0:00:41.706 **********
2025-05-03 00:56:30.779091 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:30.779105 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:30.779119 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:30.779133 | orchestrator |
2025-05-03 00:56:30.779148 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2025-05-03 00:56:30.779161 | orchestrator | Saturday 03 May 2025 00:53:49 +0000 (0:00:00.658) 0:00:42.448 **********
2025-05-03 00:56:30.779175 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:56:30.779189 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:56:30.779203 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:56:30.779218 | orchestrator |
2025-05-03 00:56:30.779233 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2025-05-03 00:56:30.779276 | orchestrator | Saturday 03 May 2025 00:53:50 +0000 (0:00:00.678) 0:00:43.106 **********
2025-05-03 00:56:30.779291 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:56:30.779306 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:30.779329 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:30.779343 | orchestrator |
2025-05-03 00:56:30.779363 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-05-03 00:56:30.779377 | orchestrator | Saturday 03 May 2025 00:53:50 +0000 (0:00:00.634) 0:00:43.784 **********
2025-05-03 00:56:30.779391 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:56:30.779405 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:56:30.779419 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2025-05-03 00:56:30.779433 | orchestrator |
2025-05-03 00:56:30.779447 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2025-05-03 00:56:30.779460 | orchestrator | Saturday 03 May 2025 00:53:51 +0000 (0:00:00.634) 0:00:44.419 ********** 2025-05-03
00:56:30.779474 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:30.779488 | orchestrator | 2025-05-03 00:56:30.779502 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-05-03 00:56:30.779515 | orchestrator | Saturday 03 May 2025 00:54:01 +0000 (0:00:10.298) 0:00:54.717 ********** 2025-05-03 00:56:30.779529 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:30.779543 | orchestrator | 2025-05-03 00:56:30.779557 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-03 00:56:30.779571 | orchestrator | Saturday 03 May 2025 00:54:01 +0000 (0:00:00.138) 0:00:54.855 ********** 2025-05-03 00:56:30.779584 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:30.779598 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:30.779612 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:30.779626 | orchestrator | 2025-05-03 00:56:30.779640 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-05-03 00:56:30.779654 | orchestrator | Saturday 03 May 2025 00:54:02 +0000 (0:00:01.025) 0:00:55.881 ********** 2025-05-03 00:56:30.779668 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:30.779682 | orchestrator | 2025-05-03 00:56:30.779696 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-05-03 00:56:30.779710 | orchestrator | Saturday 03 May 2025 00:54:11 +0000 (0:00:08.395) 0:01:04.276 ********** 2025-05-03 00:56:30.779724 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:30.779738 | orchestrator | 2025-05-03 00:56:30.779751 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-05-03 00:56:30.779765 | orchestrator | Saturday 03 May 2025 00:54:12 +0000 (0:00:01.559) 0:01:05.836 ********** 2025-05-03 00:56:30.779778 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:30.779793 | 
orchestrator | 2025-05-03 00:56:30.779807 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-05-03 00:56:30.779821 | orchestrator | Saturday 03 May 2025 00:54:15 +0000 (0:00:02.287) 0:01:08.123 ********** 2025-05-03 00:56:30.779835 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:30.779849 | orchestrator | 2025-05-03 00:56:30.779863 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-05-03 00:56:30.779876 | orchestrator | Saturday 03 May 2025 00:54:15 +0000 (0:00:00.118) 0:01:08.242 ********** 2025-05-03 00:56:30.779890 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:30.779905 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:30.779930 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:30.779945 | orchestrator | 2025-05-03 00:56:30.779959 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-05-03 00:56:30.779974 | orchestrator | Saturday 03 May 2025 00:54:15 +0000 (0:00:00.538) 0:01:08.781 ********** 2025-05-03 00:56:30.779987 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:30.780001 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:56:30.780015 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:56:30.780029 | orchestrator | 2025-05-03 00:56:30.780043 | orchestrator | RUNNING HANDLER [mariadb : Restart mariadb-clustercheck container] ************* 2025-05-03 00:56:30.780057 | orchestrator | Saturday 03 May 2025 00:54:16 +0000 (0:00:00.567) 0:01:09.348 ********** 2025-05-03 00:56:30.780071 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-03 00:56:30.780091 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:30.780105 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:56:30.780120 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:56:30.780133 | orchestrator | 2025-05-03 
00:56:30.780153 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-03 00:56:30.780167 | orchestrator | skipping: no hosts matched 2025-05-03 00:56:30.780181 | orchestrator | 2025-05-03 00:56:30.780195 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-03 00:56:30.780209 | orchestrator | 2025-05-03 00:56:30.780223 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-03 00:56:30.780237 | orchestrator | Saturday 03 May 2025 00:54:35 +0000 (0:00:19.460) 0:01:28.809 ********** 2025-05-03 00:56:30.780267 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:56:30.780282 | orchestrator | 2025-05-03 00:56:30.780297 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-03 00:56:30.780310 | orchestrator | Saturday 03 May 2025 00:54:58 +0000 (0:00:22.308) 0:01:51.117 ********** 2025-05-03 00:56:30.780331 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:30.780345 | orchestrator | 2025-05-03 00:56:30.780360 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-03 00:56:30.780373 | orchestrator | Saturday 03 May 2025 00:55:13 +0000 (0:00:15.548) 0:02:06.666 ********** 2025-05-03 00:56:30.780387 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:30.780401 | orchestrator | 2025-05-03 00:56:30.780415 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-03 00:56:30.780429 | orchestrator | 2025-05-03 00:56:30.780443 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-03 00:56:30.780457 | orchestrator | Saturday 03 May 2025 00:55:16 +0000 (0:00:02.696) 0:02:09.362 ********** 2025-05-03 00:56:30.780471 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:56:30.780485 | orchestrator | 2025-05-03 
00:56:30.780498 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-03 00:56:30.780512 | orchestrator | Saturday 03 May 2025 00:55:31 +0000 (0:00:15.651) 0:02:25.014 ********** 2025-05-03 00:56:30.780526 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:30.780547 | orchestrator | 2025-05-03 00:56:30.780570 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-03 00:56:30.780594 | orchestrator | Saturday 03 May 2025 00:55:52 +0000 (0:00:20.544) 0:02:45.558 ********** 2025-05-03 00:56:30.780618 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:30.780642 | orchestrator | 2025-05-03 00:56:30.780662 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-03 00:56:30.780675 | orchestrator | 2025-05-03 00:56:30.780689 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-03 00:56:30.780703 | orchestrator | Saturday 03 May 2025 00:55:55 +0000 (0:00:02.692) 0:02:48.250 ********** 2025-05-03 00:56:30.780718 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:30.780732 | orchestrator | 2025-05-03 00:56:30.780746 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-03 00:56:30.780759 | orchestrator | Saturday 03 May 2025 00:56:12 +0000 (0:00:17.615) 0:03:05.866 ********** 2025-05-03 00:56:30.780773 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:30.780787 | orchestrator | 2025-05-03 00:56:30.780801 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-03 00:56:30.780815 | orchestrator | Saturday 03 May 2025 00:56:13 +0000 (0:00:00.531) 0:03:06.397 ********** 2025-05-03 00:56:30.780829 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:30.780843 | orchestrator | 2025-05-03 00:56:30.780857 | orchestrator | PLAY [Apply mariadb 
post-configuration] **************************************** 2025-05-03 00:56:30.780871 | orchestrator | 2025-05-03 00:56:30.780885 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-03 00:56:30.780899 | orchestrator | Saturday 03 May 2025 00:56:15 +0000 (0:00:02.542) 0:03:08.939 ********** 2025-05-03 00:56:30.780922 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:56:30.780936 | orchestrator | 2025-05-03 00:56:30.780950 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-05-03 00:56:30.780964 | orchestrator | Saturday 03 May 2025 00:56:16 +0000 (0:00:00.783) 0:03:09.723 ********** 2025-05-03 00:56:30.780978 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:30.780993 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:30.781007 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:30.781022 | orchestrator | 2025-05-03 00:56:30.781035 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-05-03 00:56:30.781050 | orchestrator | Saturday 03 May 2025 00:56:19 +0000 (0:00:02.765) 0:03:12.488 ********** 2025-05-03 00:56:30.781063 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:30.781077 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:30.781092 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:30.781106 | orchestrator | 2025-05-03 00:56:30.781120 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-05-03 00:56:30.781134 | orchestrator | Saturday 03 May 2025 00:56:21 +0000 (0:00:02.176) 0:03:14.665 ********** 2025-05-03 00:56:30.781147 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:30.781162 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:30.781175 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:30.781189 | orchestrator | 
2025-05-03 00:56:30.781209 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-05-03 00:56:30.781224 | orchestrator | Saturday 03 May 2025 00:56:23 +0000 (0:00:02.299) 0:03:16.964 ********** 2025-05-03 00:56:30.781238 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:30.781274 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:30.781289 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:56:30.781303 | orchestrator | 2025-05-03 00:56:30.781317 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-05-03 00:56:30.781331 | orchestrator | Saturday 03 May 2025 00:56:26 +0000 (0:00:02.172) 0:03:19.137 ********** 2025-05-03 00:56:30.781344 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:56:30.781358 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:56:30.781372 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:56:30.781386 | orchestrator | 2025-05-03 00:56:30.781400 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-03 00:56:30.781414 | orchestrator | Saturday 03 May 2025 00:56:29 +0000 (0:00:03.890) 0:03:23.027 ********** 2025-05-03 00:56:30.781427 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:56:30.781441 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:56:30.781455 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:56:30.781469 | orchestrator | 2025-05-03 00:56:30.781483 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-03 00:56:30.781497 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-03 00:56:30.781511 | orchestrator | testbed-node-0 : ok=34  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=1  2025-05-03 00:56:30.781534 | orchestrator | testbed-node-1 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  
2025-05-03 00:56:33.835176 | orchestrator | testbed-node-2 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-05-03 00:56:33.835391 | orchestrator | 2025-05-03 00:56:33.835428 | orchestrator | 2025-05-03 00:56:33.835457 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-03 00:56:33.835485 | orchestrator | Saturday 03 May 2025 00:56:30 +0000 (0:00:00.362) 0:03:23.390 ********** 2025-05-03 00:56:33.835511 | orchestrator | =============================================================================== 2025-05-03 00:56:33.835573 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 37.96s 2025-05-03 00:56:33.835602 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.09s 2025-05-03 00:56:33.835627 | orchestrator | mariadb : Restart mariadb-clustercheck container ----------------------- 19.46s 2025-05-03 00:56:33.835652 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 17.62s 2025-05-03 00:56:33.835678 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.90s 2025-05-03 00:56:33.835704 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.30s 2025-05-03 00:56:33.835731 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.40s 2025-05-03 00:56:33.835760 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 6.02s 2025-05-03 00:56:33.835796 | orchestrator | mariadb : Copying over config.json files for services ------------------- 5.73s 2025-05-03 00:56:33.835834 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.39s 2025-05-03 00:56:33.835861 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.89s 2025-05-03 00:56:33.835886 | orchestrator | 
mariadb : Check mariadb containers -------------------------------------- 3.60s 2025-05-03 00:56:33.835911 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.41s 2025-05-03 00:56:33.835935 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.77s 2025-05-03 00:56:33.835958 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.54s 2025-05-03 00:56:33.835982 | orchestrator | Check MariaDB service --------------------------------------------------- 2.38s 2025-05-03 00:56:33.836006 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.30s 2025-05-03 00:56:33.836029 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.29s 2025-05-03 00:56:33.836053 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.18s 2025-05-03 00:56:33.836076 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.17s 2025-05-03 00:56:33.836100 | orchestrator | 2025-05-03 00:56:30 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:56:33.836146 | orchestrator | 2025-05-03 00:56:33 | INFO  | Task d2b616f1-9e66-420b-ad37-3fab543d8765 is in state STARTED 2025-05-03 00:56:33.836615 | orchestrator | 2025-05-03 00:56:33 | INFO  | Task 92ea823a-db0f-4096-8a0e-5a3a449437b3 is in state STARTED 2025-05-03 00:56:33.837262 | orchestrator | 2025-05-03 00:56:33 | INFO  | Task 4ce6c0b9-2ea9-4af2-a93b-25d204ecf785 is in state STARTED 2025-05-03 00:56:33.839427 | orchestrator | 2025-05-03 00:56:33 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:56:33.839964 | orchestrator | 2025-05-03 00:56:33 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:56:36.886741 | orchestrator | 2025-05-03 00:56:36 | INFO  | Task d2b616f1-9e66-420b-ad37-3fab543d8765 is in state STARTED 2025-05-03 
92ea823a-db0f-4096-8a0e-5a3a449437b3 is in state STARTED 2025-05-03 00:57:59.297483 | orchestrator | 2025-05-03 00:57:59 | INFO  | Task 4ce6c0b9-2ea9-4af2-a93b-25d204ecf785 is in state STARTED 2025-05-03 00:57:59.297522 | orchestrator | 2025-05-03 00:57:59 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:57:59.297618 | orchestrator | 2025-05-03 00:57:59 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:58:02.357531 | orchestrator | 2025-05-03 00:58:02 | INFO  | Task d2b616f1-9e66-420b-ad37-3fab543d8765 is in state STARTED 2025-05-03 00:58:02.359115 | orchestrator | 2025-05-03 00:58:02 | INFO  | Task 92ea823a-db0f-4096-8a0e-5a3a449437b3 is in state STARTED 2025-05-03 00:58:02.361596 | orchestrator | 2025-05-03 00:58:02 | INFO  | Task 4ce6c0b9-2ea9-4af2-a93b-25d204ecf785 is in state STARTED 2025-05-03 00:58:02.363281 | orchestrator | 2025-05-03 00:58:02 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:58:05.400660 | orchestrator | 2025-05-03 00:58:02 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:58:05.400793 | orchestrator | 2025-05-03 00:58:05 | INFO  | Task d2b616f1-9e66-420b-ad37-3fab543d8765 is in state STARTED 2025-05-03 00:58:05.402968 | orchestrator | 2025-05-03 00:58:05 | INFO  | Task 92ea823a-db0f-4096-8a0e-5a3a449437b3 is in state STARTED 2025-05-03 00:58:05.405404 | orchestrator | 2025-05-03 00:58:05 | INFO  | Task 4ce6c0b9-2ea9-4af2-a93b-25d204ecf785 is in state STARTED 2025-05-03 00:58:05.408403 | orchestrator | 2025-05-03 00:58:05 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:58:05.410501 | orchestrator | 2025-05-03 00:58:05 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:58:08.458369 | orchestrator | 2025-05-03 00:58:08 | INFO  | Task d2b616f1-9e66-420b-ad37-3fab543d8765 is in state STARTED 2025-05-03 00:58:08.458864 | orchestrator | 2025-05-03 00:58:08 | INFO  | Task 
92ea823a-db0f-4096-8a0e-5a3a449437b3 is in state STARTED 2025-05-03 00:58:08.460235 | orchestrator | 2025-05-03 00:58:08 | INFO  | Task 4ce6c0b9-2ea9-4af2-a93b-25d204ecf785 is in state STARTED 2025-05-03 00:58:08.461168 | orchestrator | 2025-05-03 00:58:08 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:58:11.527640 | orchestrator | 2025-05-03 00:58:08 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:58:11.527795 | orchestrator | 2025-05-03 00:58:11 | INFO  | Task d2b616f1-9e66-420b-ad37-3fab543d8765 is in state STARTED 2025-05-03 00:58:11.529384 | orchestrator | 2025-05-03 00:58:11 | INFO  | Task 92ea823a-db0f-4096-8a0e-5a3a449437b3 is in state STARTED 2025-05-03 00:58:11.530949 | orchestrator | 2025-05-03 00:58:11 | INFO  | Task 4ce6c0b9-2ea9-4af2-a93b-25d204ecf785 is in state SUCCESS 2025-05-03 00:58:11.533056 | orchestrator | 2025-05-03 00:58:11.533190 | orchestrator | 2025-05-03 00:58:11.533211 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-03 00:58:11.533227 | orchestrator | 2025-05-03 00:58:11.533241 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-03 00:58:11.533273 | orchestrator | Saturday 03 May 2025 00:56:33 +0000 (0:00:00.322) 0:00:00.322 ********** 2025-05-03 00:58:11.533288 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:58:11.533303 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:58:11.533317 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:58:11.533331 | orchestrator | 2025-05-03 00:58:11.533345 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-03 00:58:11.533360 | orchestrator | Saturday 03 May 2025 00:56:34 +0000 (0:00:00.443) 0:00:00.766 ********** 2025-05-03 00:58:11.533662 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-05-03 00:58:11.533679 | orchestrator | ok: [testbed-node-1] => 
(item=enable_horizon_True) 2025-05-03 00:58:11.533693 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-05-03 00:58:11.533707 | orchestrator | 2025-05-03 00:58:11.533722 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-05-03 00:58:11.533736 | orchestrator | 2025-05-03 00:58:11.533750 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-03 00:58:11.533764 | orchestrator | Saturday 03 May 2025 00:56:34 +0000 (0:00:00.320) 0:00:01.087 ********** 2025-05-03 00:58:11.533779 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:58:11.533794 | orchestrator | 2025-05-03 00:58:11.533808 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-05-03 00:58:11.533822 | orchestrator | Saturday 03 May 2025 00:56:35 +0000 (0:00:00.802) 0:00:01.889 ********** 2025-05-03 00:58:11.533840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-03 00:58:11.533897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-03 00:58:11.533916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-03 00:58:11.533958 | orchestrator | 2025-05-03 00:58:11.533974 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-05-03 00:58:11.533988 | orchestrator | Saturday 03 May 2025 00:56:36 +0000 (0:00:01.690) 0:00:03.579 ********** 2025-05-03 00:58:11.534002 | 
orchestrator | ok: [testbed-node-0] 2025-05-03 00:58:11.534060 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:58:11.534080 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:58:11.534094 | orchestrator | 2025-05-03 00:58:11.534136 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-03 00:58:11.534163 | orchestrator | Saturday 03 May 2025 00:56:37 +0000 (0:00:00.273) 0:00:03.853 ********** 2025-05-03 00:58:11.534188 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-03 00:58:11.534203 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-05-03 00:58:11.534217 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-05-03 00:58:11.534231 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-05-03 00:58:11.534245 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-05-03 00:58:11.534261 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-05-03 00:58:11.534277 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-05-03 00:58:11.534293 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-03 00:58:11.534308 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-05-03 00:58:11.534324 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-05-03 00:58:11.534339 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-05-03 00:58:11.534355 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-05-03 00:58:11.534371 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 
'enabled': False})  2025-05-03 00:58:11.534387 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-05-03 00:58:11.534402 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-03 00:58:11.534417 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-05-03 00:58:11.534441 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-05-03 00:58:11.534457 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-05-03 00:58:11.534474 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-05-03 00:58:11.534497 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-05-03 00:58:11.534514 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-05-03 00:58:11.534531 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-05-03 00:58:11.534554 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-05-03 00:58:11.534570 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-05-03 00:58:11.534586 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-05-03 00:58:11.534602 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'heat', 'enabled': True}) 2025-05-03 00:58:11.534617 | 
orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-05-03 00:58:11.534631 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-05-03 00:58:11.534645 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-05-03 00:58:11.534658 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-05-03 00:58:11.534672 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-05-03 00:58:11.534686 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-05-03 00:58:11.534700 | orchestrator | 2025-05-03 00:58:11.534714 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-03 00:58:11.534727 | orchestrator | Saturday 03 May 2025 00:56:38 +0000 (0:00:01.099) 0:00:04.952 ********** 2025-05-03 00:58:11.534741 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:58:11.534755 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:58:11.534769 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:58:11.534783 | orchestrator | 2025-05-03 00:58:11.534797 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-03 00:58:11.534810 | orchestrator | Saturday 03 May 2025 00:56:38 +0000 (0:00:00.436) 0:00:05.389 ********** 2025-05-03 00:58:11.534825 | orchestrator | skipping: [testbed-node-0] 2025-05-03 
00:58:11.534839 | orchestrator | 2025-05-03 00:58:11.534859 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-03 00:58:11.534874 | orchestrator | Saturday 03 May 2025 00:56:38 +0000 (0:00:00.134) 0:00:05.523 ********** 2025-05-03 00:58:11.534888 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:58:11.534901 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:58:11.534916 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:58:11.534930 | orchestrator | 2025-05-03 00:58:11.534943 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-03 00:58:11.534964 | orchestrator | Saturday 03 May 2025 00:56:39 +0000 (0:00:00.437) 0:00:05.961 ********** 2025-05-03 00:58:11.534978 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:58:11.534992 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:58:11.535005 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:58:11.535024 | orchestrator | 2025-05-03 00:58:11.535039 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-03 00:58:11.535053 | orchestrator | Saturday 03 May 2025 00:56:39 +0000 (0:00:00.316) 0:00:06.278 ********** 2025-05-03 00:58:11.535066 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:58:11.535080 | orchestrator | 2025-05-03 00:58:11.535094 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-03 00:58:11.535136 | orchestrator | Saturday 03 May 2025 00:56:39 +0000 (0:00:00.250) 0:00:06.528 ********** 2025-05-03 00:58:11.535151 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:58:11.535165 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:58:11.535179 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:58:11.535193 | orchestrator | 2025-05-03 00:58:11.535207 | orchestrator | TASK [horizon : Update policy file name] *************************************** 
2025-05-03 00:58:11.535221 | orchestrator | Saturday 03 May 2025 00:56:40 +0000 (0:00:00.362) 0:00:06.891 ********** 2025-05-03 00:58:11.535235 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:58:11.535249 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:58:11.535263 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:58:11.535277 | orchestrator | 2025-05-03 00:58:11.535291 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-03 00:58:11.535304 | orchestrator | Saturday 03 May 2025 00:56:40 +0000 (0:00:00.578) 0:00:07.470 ********** 2025-05-03 00:58:11.535318 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:58:11.535332 | orchestrator | 2025-05-03 00:58:11.535346 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-03 00:58:11.535360 | orchestrator | Saturday 03 May 2025 00:56:41 +0000 (0:00:00.128) 0:00:07.598 ********** 2025-05-03 00:58:11.535374 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:58:11.535388 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:58:11.535402 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:58:11.535416 | orchestrator | 2025-05-03 00:58:11.535429 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-03 00:58:11.535444 | orchestrator | Saturday 03 May 2025 00:56:41 +0000 (0:00:00.432) 0:00:08.031 ********** 2025-05-03 00:58:11.535458 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:58:11.535472 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:58:11.535486 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:58:11.535499 | orchestrator | 2025-05-03 00:58:11.535513 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-03 00:58:11.535527 | orchestrator | Saturday 03 May 2025 00:56:41 +0000 (0:00:00.443) 0:00:08.475 ********** 2025-05-03 00:58:11.535541 | orchestrator | skipping: 
[testbed-node-0] 2025-05-03 00:58:11.535554 | orchestrator | 2025-05-03 00:58:11.535568 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-03 00:58:11.535582 | orchestrator | Saturday 03 May 2025 00:56:42 +0000 (0:00:00.126) 0:00:08.602 ********** 2025-05-03 00:58:11.535596 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:58:11.535610 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:58:11.535624 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:58:11.535637 | orchestrator | 2025-05-03 00:58:11.535651 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-03 00:58:11.535665 | orchestrator | Saturday 03 May 2025 00:56:42 +0000 (0:00:00.430) 0:00:09.032 ********** 2025-05-03 00:58:11.535679 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:58:11.535693 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:58:11.535707 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:58:11.535721 | orchestrator | 2025-05-03 00:58:11.535735 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-03 00:58:11.535748 | orchestrator | Saturday 03 May 2025 00:56:42 +0000 (0:00:00.308) 0:00:09.341 ********** 2025-05-03 00:58:11.535769 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:58:11.535783 | orchestrator | 2025-05-03 00:58:11.535797 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-03 00:58:11.535811 | orchestrator | Saturday 03 May 2025 00:56:42 +0000 (0:00:00.229) 0:00:09.571 ********** 2025-05-03 00:58:11.535825 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:58:11.535838 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:58:11.535853 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:58:11.535866 | orchestrator | 2025-05-03 00:58:11.535885 | orchestrator | TASK [horizon : Update policy file name] 
*************************************** 2025-05-03 00:58:11.535900 | orchestrator | Saturday 03 May 2025 00:56:43 +0000 (0:00:00.285) 0:00:09.857 ********** 2025-05-03 00:58:11.535913 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:58:11.535927 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:58:11.535941 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:58:11.535955 | orchestrator | 2025-05-03 00:58:11.535969 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-03 00:58:11.535983 | orchestrator | Saturday 03 May 2025 00:56:43 +0000 (0:00:00.633) 0:00:10.491 ********** 2025-05-03 00:58:11.535997 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:58:11.536011 | orchestrator | 2025-05-03 00:58:11.536024 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-03 00:58:11.536038 | orchestrator | Saturday 03 May 2025 00:56:44 +0000 (0:00:00.153) 0:00:10.645 ********** 2025-05-03 00:58:11.536052 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:58:11.536065 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:58:11.536079 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:58:11.536094 | orchestrator | 2025-05-03 00:58:11.536160 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-03 00:58:11.536177 | orchestrator | Saturday 03 May 2025 00:56:44 +0000 (0:00:00.460) 0:00:11.105 ********** 2025-05-03 00:58:11.536198 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:58:11.536213 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:58:11.536227 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:58:11.536241 | orchestrator | 2025-05-03 00:58:11.536255 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-03 00:58:11.536269 | orchestrator | Saturday 03 May 2025 00:56:45 +0000 (0:00:00.637) 0:00:11.743 ********** 2025-05-03 
00:58:11.536283 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:58:11.536296 | orchestrator | 2025-05-03 00:58:11.536310 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-03 00:58:11.536330 | orchestrator | Saturday 03 May 2025 00:56:45 +0000 (0:00:00.134) 0:00:11.878 ********** 2025-05-03 00:58:11.536344 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:58:11.536358 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:58:11.536372 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:58:11.536385 | orchestrator | 2025-05-03 00:58:11.536399 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-03 00:58:11.536412 | orchestrator | Saturday 03 May 2025 00:56:45 +0000 (0:00:00.414) 0:00:12.292 ********** 2025-05-03 00:58:11.536424 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:58:11.536436 | orchestrator | ok: [testbed-node-1] 2025-05-03 00:58:11.536448 | orchestrator | ok: [testbed-node-2] 2025-05-03 00:58:11.536461 | orchestrator | 2025-05-03 00:58:11.536473 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-03 00:58:11.536485 | orchestrator | Saturday 03 May 2025 00:56:46 +0000 (0:00:00.626) 0:00:12.919 ********** 2025-05-03 00:58:11.536497 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:58:11.536509 | orchestrator | 2025-05-03 00:58:11.536522 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-03 00:58:11.536534 | orchestrator | Saturday 03 May 2025 00:56:46 +0000 (0:00:00.187) 0:00:13.106 ********** 2025-05-03 00:58:11.536546 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:58:11.536558 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:58:11.536577 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:58:11.536590 | orchestrator | 2025-05-03 00:58:11.536602 | orchestrator | TASK [horizon : 
Update policy file name] ***************************************
2025-05-03 00:58:11.536614 | orchestrator | Saturday 03 May 2025 00:56:46 +0000 (0:00:00.277) 0:00:13.383 **********
2025-05-03 00:58:11.536627 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:58:11.536639 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:58:11.536652 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:58:11.536664 | orchestrator |
2025-05-03 00:58:11.536677 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-03 00:58:11.536689 | orchestrator | Saturday 03 May 2025 00:56:47 +0000 (0:00:00.506) 0:00:13.890 **********
2025-05-03 00:58:11.536701 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:58:11.536713 | orchestrator |
2025-05-03 00:58:11.536725 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-03 00:58:11.536737 | orchestrator | Saturday 03 May 2025 00:56:47 +0000 (0:00:00.186) 0:00:14.077 **********
2025-05-03 00:58:11.536750 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:58:11.536763 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:58:11.536783 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:58:11.536797 | orchestrator |
2025-05-03 00:58:11.536809 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-03 00:58:11.536822 | orchestrator | Saturday 03 May 2025 00:56:47 +0000 (0:00:00.473) 0:00:14.551 **********
2025-05-03 00:58:11.536835 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:58:11.536847 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:58:11.536860 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:58:11.536872 | orchestrator |
2025-05-03 00:58:11.536885 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-03 00:58:11.536897 | orchestrator | Saturday 03 May 2025 00:56:48 +0000 (0:00:00.482) 0:00:15.033 **********
2025-05-03 00:58:11.536909 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:58:11.536921 | orchestrator |
2025-05-03 00:58:11.536933 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-03 00:58:11.536945 | orchestrator | Saturday 03 May 2025 00:56:48 +0000 (0:00:00.142) 0:00:15.175 **********
2025-05-03 00:58:11.536957 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:58:11.536970 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:58:11.536982 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:58:11.536994 | orchestrator |
2025-05-03 00:58:11.537007 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-03 00:58:11.537019 | orchestrator | Saturday 03 May 2025 00:56:48 +0000 (0:00:00.410) 0:00:15.586 **********
2025-05-03 00:58:11.537031 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:58:11.537043 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:58:11.537055 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:58:11.537067 | orchestrator |
2025-05-03 00:58:11.537084 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-03 00:58:11.537097 | orchestrator | Saturday 03 May 2025 00:56:49 +0000 (0:00:00.730) 0:00:16.316 **********
2025-05-03 00:58:11.537126 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:58:11.537138 | orchestrator |
2025-05-03 00:58:11.537151 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-03 00:58:11.537163 | orchestrator | Saturday 03 May 2025 00:56:49 +0000 (0:00:00.244) 0:00:16.561 **********
2025-05-03 00:58:11.537175 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:58:11.537187 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:58:11.537200 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:58:11.537212 | orchestrator |
2025-05-03 00:58:11.537224 | orchestrator | TASK
[horizon : Copying over config.json files for services] *******************
2025-05-03 00:58:11.537236 | orchestrator | Saturday 03 May 2025 00:56:50 +0000 (0:00:00.595) 0:00:17.157 **********
2025-05-03 00:58:11.537248 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:58:11.537261 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:58:11.537273 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:58:11.537297 | orchestrator |
2025-05-03 00:58:11.537310 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-05-03 00:58:11.537322 | orchestrator | Saturday 03 May 2025 00:56:53 +0000 (0:00:03.275) 0:00:20.433 **********
2025-05-03 00:58:11.537334 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-05-03 00:58:11.537352 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-05-03 00:58:11.537365 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-05-03 00:58:11.537378 | orchestrator |
2025-05-03 00:58:11.537390 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-05-03 00:58:11.537402 | orchestrator | Saturday 03 May 2025 00:56:56 +0000 (0:00:02.786) 0:00:23.219 **********
2025-05-03 00:58:11.537414 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-05-03 00:58:11.537427 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-05-03 00:58:11.537439 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-05-03 00:58:11.537451 | orchestrator |
2025-05-03 00:58:11.537464 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-05-03 00:58:11.537481 | orchestrator | Saturday 03 May 2025 00:56:59 +0000 (0:00:02.886) 0:00:26.106 **********
2025-05-03 00:58:11.537494 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-05-03 00:58:11.537507 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-05-03 00:58:11.537519 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-05-03 00:58:11.537531 | orchestrator |
2025-05-03 00:58:11.537544 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-05-03 00:58:11.537556 | orchestrator | Saturday 03 May 2025 00:57:01 +0000 (0:00:02.326) 0:00:28.432 **********
2025-05-03 00:58:11.537568 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:58:11.537581 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:58:11.537593 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:58:11.537605 | orchestrator |
2025-05-03 00:58:11.537617 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2025-05-03 00:58:11.537630 | orchestrator | Saturday 03 May 2025 00:57:02 +0000 (0:00:00.243) 0:00:28.676 **********
2025-05-03 00:58:11.537642 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:58:11.537654 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:58:11.537666 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:58:11.537678 | orchestrator |
2025-05-03 00:58:11.537690 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-05-03 00:58:11.537702 | orchestrator | Saturday 03 May 2025 00:57:02 +0000 (0:00:00.542) 0:00:28.991 **********
2025-05-03 00:58:11.537715 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:58:11.537727 | orchestrator |
2025-05-03 00:58:11.537740
| orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-05-03 00:58:11.537752 | orchestrator | Saturday 03 May 2025 00:57:02 +0000 (0:00:00.542) 0:00:29.534 ********** 2025-05-03 00:58:11.537772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-03 00:58:11.537793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-03 00:58:11.537814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-03 00:58:11.537835 | orchestrator | 2025-05-03 00:58:11.537847 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-05-03 00:58:11.537860 | orchestrator | Saturday 03 May 2025 00:57:04 +0000 (0:00:01.619) 0:00:31.154 ********** 2025-05-03 00:58:11.537873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-03 00:58:11.537893 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:58:11.537913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 
'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-03 00:58:11.537927 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:58:11.537940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-03 00:58:11.537959 | orchestrator | 
skipping: [testbed-node-2] 2025-05-03 00:58:11.537972 | orchestrator | 2025-05-03 00:58:11.537984 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-05-03 00:58:11.537997 | orchestrator | Saturday 03 May 2025 00:57:05 +0000 (0:00:00.876) 0:00:32.030 ********** 2025-05-03 00:58:11.538045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-03 00:58:11.538063 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:58:11.538076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': 
'80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-03 00:58:11.538096 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:58:11.538135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-03 00:58:11.538150 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:58:11.538163 | orchestrator | 2025-05-03 00:58:11.538175 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-05-03 00:58:11.538187 | orchestrator | Saturday 03 May 2025 00:57:06 +0000 (0:00:01.088) 0:00:33.119 ********** 2025-05-03 00:58:11.538205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-03 00:58:11.538226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-03 00:58:11.538252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 
'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-03 00:58:11.538267 | orchestrator | 2025-05-03 00:58:11.538279 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-03 00:58:11.538291 | orchestrator | Saturday 03 May 2025 00:57:11 +0000 (0:00:05.135) 0:00:38.254 ********** 2025-05-03 00:58:11.538304 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:58:11.538316 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:58:11.538328 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:58:11.538340 | orchestrator | 2025-05-03 00:58:11.538353 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-03 00:58:11.538365 | orchestrator | Saturday 03 May 2025 00:57:12 +0000 (0:00:00.341) 0:00:38.596 ********** 2025-05-03 00:58:11.538377 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 00:58:11.538390 | orchestrator | 2025-05-03 00:58:11.538402 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-05-03 00:58:11.538414 | orchestrator | Saturday 03 May 2025 00:57:12 +0000 (0:00:00.478) 0:00:39.075 ********** 2025-05-03 00:58:11.538427 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:58:11.538439 | orchestrator | 2025-05-03 00:58:11.538456 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-05-03 00:58:11.538468 | orchestrator | Saturday 03 May 2025 00:57:14 +0000 (0:00:02.399) 0:00:41.474 ********** 2025-05-03 00:58:11.538480 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:58:11.538493 | orchestrator | 2025-05-03 00:58:11.538505 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-05-03 00:58:11.538523 | orchestrator | Saturday 03 May 
2025 00:57:17 +0000 (0:00:02.170) 0:00:43.644 **********
2025-05-03 00:58:11.538535 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:58:11.538548 | orchestrator |
2025-05-03 00:58:11.538560 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-05-03 00:58:11.538572 | orchestrator | Saturday 03 May 2025 00:57:30 +0000 (0:00:13.464) 0:00:57.109 **********
2025-05-03 00:58:11.538585 | orchestrator |
2025-05-03 00:58:11.538597 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-05-03 00:58:11.538609 | orchestrator | Saturday 03 May 2025 00:57:30 +0000 (0:00:00.060) 0:00:57.169 **********
2025-05-03 00:58:11.538621 | orchestrator |
2025-05-03 00:58:11.538633 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-05-03 00:58:11.538645 | orchestrator | Saturday 03 May 2025 00:57:30 +0000 (0:00:00.190) 0:00:57.360 **********
2025-05-03 00:58:11.538657 | orchestrator |
2025-05-03 00:58:11.538669 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-05-03 00:58:11.538682 | orchestrator | Saturday 03 May 2025 00:57:30 +0000 (0:00:00.060) 0:00:57.420 **********
2025-05-03 00:58:11.538694 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:58:11.538706 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:58:11.538718 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:58:11.538730 | orchestrator |
2025-05-03 00:58:11.538743 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 00:58:11.538755 | orchestrator | testbed-node-0 : ok=39  changed=11  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0
2025-05-03 00:58:11.538768 | orchestrator | testbed-node-1 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-05-03 00:58:11.538780 | orchestrator | testbed-node-2 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-05-03 00:58:11.538793 | orchestrator |
2025-05-03 00:58:11.538805 | orchestrator |
2025-05-03 00:58:11.538817 | orchestrator | TASKS RECAP ********************************************************************
2025-05-03 00:58:11.538829 | orchestrator | Saturday 03 May 2025 00:58:10 +0000 (0:00:39.648) 0:01:37.069 **********
2025-05-03 00:58:11.538842 | orchestrator | ===============================================================================
2025-05-03 00:58:11.538854 | orchestrator | horizon : Restart horizon container ------------------------------------ 39.65s
2025-05-03 00:58:11.538866 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 13.46s
2025-05-03 00:58:11.538878 | orchestrator | horizon : Deploy horizon container -------------------------------------- 5.14s
2025-05-03 00:58:11.538890 | orchestrator | horizon : Copying over config.json files for services ------------------- 3.28s
2025-05-03 00:58:11.538902 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.89s
2025-05-03 00:58:11.538915 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.79s
2025-05-03 00:58:11.538927 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.40s
2025-05-03 00:58:11.538939 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.33s
2025-05-03 00:58:11.538951 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.17s
2025-05-03 00:58:11.538964 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.69s
2025-05-03 00:58:11.538976 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.62s
2025-05-03 00:58:11.538988 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.10s
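The PLAY RECAP rows above follow Ansible's fixed `key=value` counter format (`ok`, `changed`, `unreachable`, `failed`, `skipped`, `rescued`, `ignored`). A minimal sketch of how such a row can be parsed into counters when post-processing logs like this one (`parse_recap` is a hypothetical helper, not part of the job):

```python
import re

# One "name=number" counter, e.g. "ok=39" or "failed=0".
RECAP_RE = re.compile(r"(\w+)=(\d+)")

def parse_recap(line):
    """Split one PLAY RECAP row into (hostname, counter dict)."""
    host, _, rest = line.partition(":")
    return host.strip(), {k: int(v) for k, v in RECAP_RE.findall(rest)}

host, counts = parse_recap(
    "testbed-node-0 : ok=39 changed=11 unreachable=0 failed=0 skipped=27 rescued=0 ignored=0"
)
print(host, counts["changed"], counts["failed"])  # testbed-node-0 11 0
```

A check like `counts["failed"] == 0 and counts["unreachable"] == 0` is the usual way CI tooling decides whether a recap row represents a clean run.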
2025-05-03 00:58:11.539000 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.09s 2025-05-03 00:58:11.539017 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.88s 2025-05-03 00:58:14.582801 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.80s 2025-05-03 00:58:14.582980 | orchestrator | horizon : Update policy file name --------------------------------------- 0.73s 2025-05-03 00:58:14.583017 | orchestrator | horizon : Update policy file name --------------------------------------- 0.64s 2025-05-03 00:58:14.583044 | orchestrator | horizon : Update policy file name --------------------------------------- 0.63s 2025-05-03 00:58:14.583071 | orchestrator | horizon : Update policy file name --------------------------------------- 0.63s 2025-05-03 00:58:14.583094 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.60s 2025-05-03 00:58:14.583151 | orchestrator | 2025-05-03 00:58:11 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:58:14.583167 | orchestrator | 2025-05-03 00:58:11 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:58:14.583200 | orchestrator | 2025-05-03 00:58:14 | INFO  | Task d2b616f1-9e66-420b-ad37-3fab543d8765 is in state STARTED 2025-05-03 00:58:14.583684 | orchestrator | 2025-05-03 00:58:14 | INFO  | Task 92ea823a-db0f-4096-8a0e-5a3a449437b3 is in state STARTED 2025-05-03 00:58:14.584861 | orchestrator | 2025-05-03 00:58:14 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:58:17.639479 | orchestrator | 2025-05-03 00:58:14 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:58:17.639639 | orchestrator | 2025-05-03 00:58:17 | INFO  | Task d2b616f1-9e66-420b-ad37-3fab543d8765 is in state STARTED 2025-05-03 00:58:17.642536 | orchestrator | 2025-05-03 00:58:17 | INFO  | Task 
92ea823a-db0f-4096-8a0e-5a3a449437b3 is in state STARTED 2025-05-03 00:58:17.644067 | orchestrator | 2025-05-03 00:58:17 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:58:20.690644 | orchestrator | 2025-05-03 00:58:17 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:58:20.690784 | orchestrator | 2025-05-03 00:58:20 | INFO  | Task d2b616f1-9e66-420b-ad37-3fab543d8765 is in state STARTED 2025-05-03 00:58:20.691930 | orchestrator | 2025-05-03 00:58:20 | INFO  | Task 92ea823a-db0f-4096-8a0e-5a3a449437b3 is in state STARTED 2025-05-03 00:58:20.694108 | orchestrator | 2025-05-03 00:58:20 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:58:23.743977 | orchestrator | 2025-05-03 00:58:20 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:58:23.744170 | orchestrator | 2025-05-03 00:58:23 | INFO  | Task d2b616f1-9e66-420b-ad37-3fab543d8765 is in state STARTED 2025-05-03 00:58:23.744408 | orchestrator | 2025-05-03 00:58:23 | INFO  | Task 92ea823a-db0f-4096-8a0e-5a3a449437b3 is in state STARTED 2025-05-03 00:58:23.745292 | orchestrator | 2025-05-03 00:58:23 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:58:26.800394 | orchestrator | 2025-05-03 00:58:23 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:58:26.800557 | orchestrator | 2025-05-03 00:58:26 | INFO  | Task d2b616f1-9e66-420b-ad37-3fab543d8765 is in state STARTED 2025-05-03 00:58:26.802146 | orchestrator | 2025-05-03 00:58:26 | INFO  | Task 92ea823a-db0f-4096-8a0e-5a3a449437b3 is in state STARTED 2025-05-03 00:58:26.803646 | orchestrator | 2025-05-03 00:58:26 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:58:26.803773 | orchestrator | 2025-05-03 00:58:26 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:58:29.856123 | orchestrator | 2025-05-03 00:58:29 | INFO  | Task d2b616f1-9e66-420b-ad37-3fab543d8765 is in state 
STARTED 2025-05-03 00:58:29.857228 | orchestrator | 2025-05-03 00:58:29 | INFO  | Task 92ea823a-db0f-4096-8a0e-5a3a449437b3 is in state STARTED 2025-05-03 00:58:29.858971 | orchestrator | 2025-05-03 00:58:29 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:58:29.859158 | orchestrator | 2025-05-03 00:58:29 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:58:32.922385 | orchestrator | 2025-05-03 00:58:32 | INFO  | Task ed90f1ce-3170-4034-9f0e-e50188a0a27b is in state STARTED 2025-05-03 00:58:32.922815 | orchestrator | 2025-05-03 00:58:32 | INFO  | Task d2b616f1-9e66-420b-ad37-3fab543d8765 is in state SUCCESS 2025-05-03 00:58:32.925971 | orchestrator | 2025-05-03 00:58:32.926544 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-03 00:58:32.926565 | orchestrator | 2025-05-03 00:58:32.926580 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-05-03 00:58:32.926595 | orchestrator | 2025-05-03 00:58:32.926609 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-03 00:58:32.926623 | orchestrator | Saturday 03 May 2025 00:56:23 +0000 (0:00:01.180) 0:00:01.180 ********** 2025-05-03 00:58:32.926638 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 00:58:32.926653 | orchestrator | 2025-05-03 00:58:32.926667 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-03 00:58:32.926681 | orchestrator | Saturday 03 May 2025 00:56:24 +0000 (0:00:00.534) 0:00:01.714 ********** 2025-05-03 00:58:32.926696 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0) 2025-05-03 00:58:32.926710 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1) 2025-05-03 00:58:32.926723 | orchestrator | changed: 
[testbed-node-3] => (item=testbed-node-2) 2025-05-03 00:58:32.926737 | orchestrator | 2025-05-03 00:58:32.926751 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-03 00:58:32.926765 | orchestrator | Saturday 03 May 2025 00:56:25 +0000 (0:00:00.835) 0:00:02.550 ********** 2025-05-03 00:58:32.926779 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 00:58:32.926793 | orchestrator | 2025-05-03 00:58:32.926807 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-03 00:58:32.926821 | orchestrator | Saturday 03 May 2025 00:56:25 +0000 (0:00:00.776) 0:00:03.326 ********** 2025-05-03 00:58:32.926834 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:58:32.926849 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:58:32.926863 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:58:32.926876 | orchestrator | 2025-05-03 00:58:32.926890 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-03 00:58:32.926904 | orchestrator | Saturday 03 May 2025 00:56:26 +0000 (0:00:00.647) 0:00:03.974 ********** 2025-05-03 00:58:32.926918 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:58:32.926931 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:58:32.926945 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:58:32.926959 | orchestrator | 2025-05-03 00:58:32.926973 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-03 00:58:32.926987 | orchestrator | Saturday 03 May 2025 00:56:26 +0000 (0:00:00.359) 0:00:04.334 ********** 2025-05-03 00:58:32.927000 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:58:32.927014 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:58:32.927027 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:58:32.927042 | orchestrator | 2025-05-03 00:58:32.927058 | 
orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-03 00:58:32.927113 | orchestrator | Saturday 03 May 2025 00:56:27 +0000 (0:00:00.887) 0:00:05.221 ********** 2025-05-03 00:58:32.927131 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:58:32.927147 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:58:32.927180 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:58:32.927197 | orchestrator | 2025-05-03 00:58:32.927213 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-03 00:58:32.927229 | orchestrator | Saturday 03 May 2025 00:56:28 +0000 (0:00:00.357) 0:00:05.578 ********** 2025-05-03 00:58:32.927267 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:58:32.927284 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:58:32.927300 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:58:32.927315 | orchestrator | 2025-05-03 00:58:32.927331 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-03 00:58:32.927347 | orchestrator | Saturday 03 May 2025 00:56:28 +0000 (0:00:00.321) 0:00:05.900 ********** 2025-05-03 00:58:32.927363 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:58:32.927378 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:58:32.927394 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:58:32.927410 | orchestrator | 2025-05-03 00:58:32.927424 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-03 00:58:32.927438 | orchestrator | Saturday 03 May 2025 00:56:28 +0000 (0:00:00.403) 0:00:06.303 ********** 2025-05-03 00:58:32.927452 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.927466 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:58:32.927480 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:58:32.927494 | orchestrator | 2025-05-03 00:58:32.927507 | orchestrator | TASK [ceph-facts : set_fact 
ceph_release ceph_stable_release] ****************** 2025-05-03 00:58:32.927605 | orchestrator | Saturday 03 May 2025 00:56:29 +0000 (0:00:00.513) 0:00:06.816 ********** 2025-05-03 00:58:32.927623 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:58:32.927721 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:58:32.927737 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:58:32.927751 | orchestrator | 2025-05-03 00:58:32.927765 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-03 00:58:32.927779 | orchestrator | Saturday 03 May 2025 00:56:29 +0000 (0:00:00.295) 0:00:07.112 ********** 2025-05-03 00:58:32.927793 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-03 00:58:32.927813 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-03 00:58:32.927828 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-03 00:58:32.927842 | orchestrator | 2025-05-03 00:58:32.927856 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-03 00:58:32.927869 | orchestrator | Saturday 03 May 2025 00:56:30 +0000 (0:00:00.727) 0:00:07.839 ********** 2025-05-03 00:58:32.927883 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:58:32.927897 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:58:32.927911 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:58:32.927924 | orchestrator | 2025-05-03 00:58:32.927938 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-03 00:58:32.927960 | orchestrator | Saturday 03 May 2025 00:56:30 +0000 (0:00:00.470) 0:00:08.310 ********** 2025-05-03 00:58:32.928000 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-03 00:58:32.928027 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-03 00:58:32.928052 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-03 00:58:32.928067 | orchestrator | 2025-05-03 00:58:32.928109 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-03 00:58:32.928124 | orchestrator | Saturday 03 May 2025 00:56:33 +0000 (0:00:02.350) 0:00:10.660 ********** 2025-05-03 00:58:32.928138 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-03 00:58:32.928152 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-03 00:58:32.928166 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-03 00:58:32.928180 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.928194 | orchestrator | 2025-05-03 00:58:32.928208 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-03 00:58:32.928222 | orchestrator | Saturday 03 May 2025 00:56:33 +0000 (0:00:00.560) 0:00:11.221 ********** 2025-05-03 00:58:32.928237 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-03 00:58:32.928267 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-03 00:58:32.928282 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-03 00:58:32.928296 | orchestrator | 
skipping: [testbed-node-3] 2025-05-03 00:58:32.928310 | orchestrator | 2025-05-03 00:58:32.928324 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-03 00:58:32.928338 | orchestrator | Saturday 03 May 2025 00:56:34 +0000 (0:00:00.659) 0:00:11.881 ********** 2025-05-03 00:58:32.928356 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-03 00:58:32.928373 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-03 00:58:32.928390 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-03 00:58:32.928406 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.928423 | orchestrator | 2025-05-03 00:58:32.928439 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-03 00:58:32.928454 | orchestrator | Saturday 03 May 2025 00:56:34 +0000 
(0:00:00.152) 0:00:12.034 ********** 2025-05-03 00:58:32.928472 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '33bba94c896d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-03 00:56:31.717767', 'end': '2025-05-03 00:56:31.769708', 'delta': '0:00:00.051941', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['33bba94c896d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-03 00:58:32.928508 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': 'de0c6f38c246', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-03 00:56:32.310831', 'end': '2025-05-03 00:56:32.346624', 'delta': '0:00:00.035793', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['de0c6f38c246'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-03 00:58:32.928535 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': 'd2bb2bc0e317', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-03 00:56:32.851683', 'end': '2025-05-03 00:56:32.889388', 'delta': '0:00:00.037705', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', 
'_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d2bb2bc0e317'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-03 00:58:32.928551 | orchestrator | 2025-05-03 00:58:32.928567 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-03 00:58:32.928582 | orchestrator | Saturday 03 May 2025 00:56:34 +0000 (0:00:00.247) 0:00:12.282 ********** 2025-05-03 00:58:32.928597 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:58:32.928613 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:58:32.928628 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:58:32.928643 | orchestrator | 2025-05-03 00:58:32.928659 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-03 00:58:32.928675 | orchestrator | Saturday 03 May 2025 00:56:35 +0000 (0:00:00.538) 0:00:12.820 ********** 2025-05-03 00:58:32.928691 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-03 00:58:32.928706 | orchestrator | 2025-05-03 00:58:32.928720 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-03 00:58:32.928734 | orchestrator | Saturday 03 May 2025 00:56:36 +0000 (0:00:01.334) 0:00:14.154 ********** 2025-05-03 00:58:32.928748 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.928762 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:58:32.928776 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:58:32.928790 | orchestrator | 2025-05-03 00:58:32.928804 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-03 00:58:32.928818 | orchestrator | Saturday 03 May 2025 00:56:37 +0000 (0:00:00.498) 0:00:14.652 ********** 2025-05-03 
00:58:32.928832 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.928845 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:58:32.928859 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:58:32.928873 | orchestrator | 2025-05-03 00:58:32.928887 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-03 00:58:32.928900 | orchestrator | Saturday 03 May 2025 00:56:37 +0000 (0:00:00.496) 0:00:15.149 ********** 2025-05-03 00:58:32.928914 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.928928 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:58:32.928942 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:58:32.928956 | orchestrator | 2025-05-03 00:58:32.928970 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-03 00:58:32.928984 | orchestrator | Saturday 03 May 2025 00:56:38 +0000 (0:00:00.309) 0:00:15.458 ********** 2025-05-03 00:58:32.928997 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:58:32.929011 | orchestrator | 2025-05-03 00:58:32.929025 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-03 00:58:32.929039 | orchestrator | Saturday 03 May 2025 00:56:38 +0000 (0:00:00.126) 0:00:15.584 ********** 2025-05-03 00:58:32.929053 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.929067 | orchestrator | 2025-05-03 00:58:32.929139 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-03 00:58:32.929161 | orchestrator | Saturday 03 May 2025 00:56:38 +0000 (0:00:00.221) 0:00:15.806 ********** 2025-05-03 00:58:32.929176 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.929190 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:58:32.929204 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:58:32.929226 | orchestrator | 2025-05-03 00:58:32.929240 | orchestrator | 
TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-03 00:58:32.929254 | orchestrator | Saturday 03 May 2025 00:56:38 +0000 (0:00:00.501) 0:00:16.308 ********** 2025-05-03 00:58:32.929267 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.929281 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:58:32.929295 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:58:32.929309 | orchestrator | 2025-05-03 00:58:32.929323 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-03 00:58:32.929337 | orchestrator | Saturday 03 May 2025 00:56:39 +0000 (0:00:00.326) 0:00:16.634 ********** 2025-05-03 00:58:32.929351 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.929365 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:58:32.929379 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:58:32.929393 | orchestrator | 2025-05-03 00:58:32.929407 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-03 00:58:32.929421 | orchestrator | Saturday 03 May 2025 00:56:39 +0000 (0:00:00.337) 0:00:16.972 ********** 2025-05-03 00:58:32.929435 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.929449 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:58:32.929470 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:58:32.929485 | orchestrator | 2025-05-03 00:58:32.929499 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-03 00:58:32.929513 | orchestrator | Saturday 03 May 2025 00:56:39 +0000 (0:00:00.295) 0:00:17.268 ********** 2025-05-03 00:58:32.929526 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.929539 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:58:32.929551 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:58:32.929563 | orchestrator | 2025-05-03 00:58:32.929576 | orchestrator | 
TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-03 00:58:32.929588 | orchestrator | Saturday 03 May 2025 00:56:40 +0000 (0:00:00.574) 0:00:17.843 ********** 2025-05-03 00:58:32.929600 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.929613 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:58:32.929625 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:58:32.929642 | orchestrator | 2025-05-03 00:58:32.929655 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-03 00:58:32.929667 | orchestrator | Saturday 03 May 2025 00:56:40 +0000 (0:00:00.367) 0:00:18.210 ********** 2025-05-03 00:58:32.929680 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.929692 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:58:32.929705 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:58:32.929717 | orchestrator | 2025-05-03 00:58:32.929730 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-03 00:58:32.929742 | orchestrator | Saturday 03 May 2025 00:56:41 +0000 (0:00:00.346) 0:00:18.557 ********** 2025-05-03 00:58:32.929756 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--eca5292b--8794--515a--ad73--b5efc7970d6a-osd--block--eca5292b--8794--515a--ad73--b5efc7970d6a', 'dm-uuid-LVM-5wi2Uys0qhygBUkChs5OXnVhGMzfG0GakB9L4O31j5FxXitHvecuhqod6eK6c34C'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.929770 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': 
{'ids': ['dm-name-ceph--a7a18630--ef35--59a0--a2f0--363b4ab3cd76-osd--block--a7a18630--ef35--59a0--a2f0--363b4ab3cd76', 'dm-uuid-LVM-kp9n5HxxuNkKHyP78qbcuszFm7e3CGahgfk8tFiDTv0tEIu3EeQmgY7AnN6kuQeo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.929790 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.929804 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.929817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.929829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.929857 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.929880 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.929903 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ba494882--e80b--5600--bb3d--47da88e10312-osd--block--ba494882--e80b--5600--bb3d--47da88e10312', 'dm-uuid-LVM-yDJJ83ZO7AdoFPcoVMO7Rk06u8j52pHc3X42D0qUuRvfNM5xXORgoiyqmQUibPgv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.929916 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.929930 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1900210e--f5cf--596b--8948--bbf6ca001e1a-osd--block--1900210e--f5cf--596b--8948--bbf6ca001e1a', 'dm-uuid-LVM-HYRfKS28EYFp3oxOfvep8OhgS2R4Om6mRKPWOU1bJ0PDKkMQaEu3Pm2bL5pdCURq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.929950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.929963 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.929986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d13b5c6-5c37-4969-9bdf-e1b816fbff4c', 'scsi-SQEMU_QEMU_HARDDISK_9d13b5c6-5c37-4969-9bdf-e1b816fbff4c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d13b5c6-5c37-4969-9bdf-e1b816fbff4c-part1', 'scsi-SQEMU_QEMU_HARDDISK_9d13b5c6-5c37-4969-9bdf-e1b816fbff4c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d13b5c6-5c37-4969-9bdf-e1b816fbff4c-part14', 'scsi-SQEMU_QEMU_HARDDISK_9d13b5c6-5c37-4969-9bdf-e1b816fbff4c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d13b5c6-5c37-4969-9bdf-e1b816fbff4c-part15', 'scsi-SQEMU_QEMU_HARDDISK_9d13b5c6-5c37-4969-9bdf-e1b816fbff4c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d13b5c6-5c37-4969-9bdf-e1b816fbff4c-part16', 'scsi-SQEMU_QEMU_HARDDISK_9d13b5c6-5c37-4969-9bdf-e1b816fbff4c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 
'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:58:32.930002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--eca5292b--8794--515a--ad73--b5efc7970d6a-osd--block--eca5292b--8794--515a--ad73--b5efc7970d6a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-OGM7bt-6Lws-mzpe-FKub-u2Iw-7z2j-EBw5Od', 'scsi-0QEMU_QEMU_HARDDISK_60710ea4-1ba5-44da-b34f-cb4cc5f20e97', 'scsi-SQEMU_QEMU_HARDDISK_60710ea4-1ba5-44da-b34f-cb4cc5f20e97'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:58:32.930054 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.930094 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a7a18630--ef35--59a0--a2f0--363b4ab3cd76-osd--block--a7a18630--ef35--59a0--a2f0--363b4ab3cd76'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vaDo6X-Qoz1-R4DZ-qU2b-jOxG-jAxc-s9G0Cj', 'scsi-0QEMU_QEMU_HARDDISK_494ae4e2-fb03-468f-bae0-ffa1e1c51b21', 'scsi-SQEMU_QEMU_HARDDISK_494ae4e2-fb03-468f-bae0-ffa1e1c51b21'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:58:32.930118 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.930145 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbc89abf-41e4-403a-af47-fe4d6db2bcc8', 'scsi-SQEMU_QEMU_HARDDISK_fbc89abf-41e4-403a-af47-fe4d6db2bcc8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:58:32.930159 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.930172 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.930185 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-03-00-02-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}) 
 2025-05-03 00:58:32.930216 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.930230 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.930243 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--63c4e6bd--963b--5ec8--a8d0--e52c79716553-osd--block--63c4e6bd--963b--5ec8--a8d0--e52c79716553', 'dm-uuid-LVM-MiWTDaoZ0DQk8f75uPQZInv663LTp1egWVeD9SImLQUjMxdE5G2TMgiKy3wAIc5l'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.930256 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f0db6d06--6fa6--557d--977f--52f0cf84ead8-osd--block--f0db6d06--6fa6--557d--977f--52f0cf84ead8', 'dm-uuid-LVM-FeQ4dS3xIArOhUe4AB0NAWIFxklHuD7CePqpl3uW2Y9xGDRtr08HIoWKaUXHu5fi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.930269 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.930281 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.930300 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.930314 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.930327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.930346 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80e5cdca-5597-4b73-960a-f9a5fdfd6b66', 'scsi-SQEMU_QEMU_HARDDISK_80e5cdca-5597-4b73-960a-f9a5fdfd6b66'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80e5cdca-5597-4b73-960a-f9a5fdfd6b66-part1', 'scsi-SQEMU_QEMU_HARDDISK_80e5cdca-5597-4b73-960a-f9a5fdfd6b66-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80e5cdca-5597-4b73-960a-f9a5fdfd6b66-part14', 'scsi-SQEMU_QEMU_HARDDISK_80e5cdca-5597-4b73-960a-f9a5fdfd6b66-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80e5cdca-5597-4b73-960a-f9a5fdfd6b66-part15', 'scsi-SQEMU_QEMU_HARDDISK_80e5cdca-5597-4b73-960a-f9a5fdfd6b66-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80e5cdca-5597-4b73-960a-f9a5fdfd6b66-part16', 'scsi-SQEMU_QEMU_HARDDISK_80e5cdca-5597-4b73-960a-f9a5fdfd6b66-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:58:32.930361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.930381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ba494882--e80b--5600--bb3d--47da88e10312-osd--block--ba494882--e80b--5600--bb3d--47da88e10312'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bIXglJ-OLEb-NbWl-oOub-R10M-mHYP-C3V7kQ', 'scsi-0QEMU_QEMU_HARDDISK_8a5c9f8e-4062-4859-b774-db2eb35d9068', 'scsi-SQEMU_QEMU_HARDDISK_8a5c9f8e-4062-4859-b774-db2eb35d9068'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:58:32.930395 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.930408 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1900210e--f5cf--596b--8948--bbf6ca001e1a-osd--block--1900210e--f5cf--596b--8948--bbf6ca001e1a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qVneO7-0GKD-f1CR-Y8v7-JIC4-Z8Uw-v4Jeo8', 'scsi-0QEMU_QEMU_HARDDISK_6a5303a2-e8ba-422b-a7dc-ef5d91cab650', 'scsi-SQEMU_QEMU_HARDDISK_6a5303a2-e8ba-422b-a7dc-ef5d91cab650'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:58:32.930427 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.930440 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78b7c3f7-b361-43c7-bb55-097042834471', 'scsi-SQEMU_QEMU_HARDDISK_78b7c3f7-b361-43c7-bb55-097042834471'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:58:32.930457 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.930471 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-03-00-02-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:58:32.930484 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:58:32.930503 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:58:32.930522 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f7b2d31-8f7a-47ec-8821-2cb523ca656c', 'scsi-SQEMU_QEMU_HARDDISK_4f7b2d31-8f7a-47ec-8821-2cb523ca656c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f7b2d31-8f7a-47ec-8821-2cb523ca656c-part1', 'scsi-SQEMU_QEMU_HARDDISK_4f7b2d31-8f7a-47ec-8821-2cb523ca656c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f7b2d31-8f7a-47ec-8821-2cb523ca656c-part14', 'scsi-SQEMU_QEMU_HARDDISK_4f7b2d31-8f7a-47ec-8821-2cb523ca656c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f7b2d31-8f7a-47ec-8821-2cb523ca656c-part15', 'scsi-SQEMU_QEMU_HARDDISK_4f7b2d31-8f7a-47ec-8821-2cb523ca656c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f7b2d31-8f7a-47ec-8821-2cb523ca656c-part16', 'scsi-SQEMU_QEMU_HARDDISK_4f7b2d31-8f7a-47ec-8821-2cb523ca656c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 
'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:58:32.930545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--63c4e6bd--963b--5ec8--a8d0--e52c79716553-osd--block--63c4e6bd--963b--5ec8--a8d0--e52c79716553'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2J3MGh-yNz7-dSNS-ORTt-jcBj-2ntY-G0OcM3', 'scsi-0QEMU_QEMU_HARDDISK_626178b7-dd78-4872-a9d6-22f12232405d', 'scsi-SQEMU_QEMU_HARDDISK_626178b7-dd78-4872-a9d6-22f12232405d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:58:32.930559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f0db6d06--6fa6--557d--977f--52f0cf84ead8-osd--block--f0db6d06--6fa6--557d--977f--52f0cf84ead8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ijisTF-blyr-zhT5-NEtO-Qk9g-ruUm-tjRUjw', 'scsi-0QEMU_QEMU_HARDDISK_cf08c0e7-08ad-4d2d-8710-ce05fc114cf2', 'scsi-SQEMU_QEMU_HARDDISK_cf08c0e7-08ad-4d2d-8710-ce05fc114cf2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:58:32.930583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_592c23d3-c323-4834-ad18-db1726824a9d', 'scsi-SQEMU_QEMU_HARDDISK_592c23d3-c323-4834-ad18-db1726824a9d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:58:32.930597 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-03-00-02-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:58:32.930616 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:58:32.930629 | orchestrator | 2025-05-03 00:58:32.930642 | orchestrator | TASK [ceph-facts : get ceph current 
status] ************************************
2025-05-03 00:58:32.930654 | orchestrator | Saturday 03 May 2025 00:56:41 +0000 (0:00:00.643) 0:00:19.200 **********
2025-05-03 00:58:32.930667 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-05-03 00:58:32.930679 | orchestrator |
2025-05-03 00:58:32.930692 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] *******************************
2025-05-03 00:58:32.930704 | orchestrator | Saturday 03 May 2025 00:56:43 +0000 (0:00:01.467) 0:00:20.668 **********
2025-05-03 00:58:32.930717 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:58:32.930729 | orchestrator |
2025-05-03 00:58:32.930742 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] **************************************
2025-05-03 00:58:32.930754 | orchestrator | Saturday 03 May 2025 00:56:43 +0000 (0:00:00.157) 0:00:20.825 **********
2025-05-03 00:58:32.930766 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:58:32.930779 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:58:32.930791 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:58:32.930803 | orchestrator |
2025-05-03 00:58:32.930816 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ******************************
2025-05-03 00:58:32.930828 | orchestrator | Saturday 03 May 2025 00:56:43 +0000 (0:00:00.391) 0:00:21.217 **********
2025-05-03 00:58:32.930840 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:58:32.930853 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:58:32.930865 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:58:32.930877 | orchestrator |
2025-05-03 00:58:32.930890 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] ***************
2025-05-03 00:58:32.930902 | orchestrator | Saturday 03 May 2025 00:56:44 +0000 (0:00:00.771) 0:00:21.988 **********
2025-05-03 00:58:32.930914 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:58:32.930927 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:58:32.930939 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:58:32.930951 | orchestrator |
2025-05-03 00:58:32.930964 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] ***************************
2025-05-03 00:58:32.930976 | orchestrator | Saturday 03 May 2025 00:56:44 +0000 (0:00:00.368) 0:00:22.357 **********
2025-05-03 00:58:32.930988 | orchestrator | ok: [testbed-node-3]
2025-05-03 00:58:32.931000 | orchestrator | ok: [testbed-node-4]
2025-05-03 00:58:32.931013 | orchestrator | ok: [testbed-node-5]
2025-05-03 00:58:32.931025 | orchestrator |
2025-05-03 00:58:32.931037 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
2025-05-03 00:58:32.931050 | orchestrator | Saturday 03 May 2025 00:56:45 +0000 (0:00:00.929) 0:00:23.286 **********
2025-05-03 00:58:32.931062 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:58:32.931129 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:58:32.931145 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:58:32.931158 | orchestrator |
2025-05-03 00:58:32.931170 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] ***************************
2025-05-03 00:58:32.931183 | orchestrator | Saturday 03 May 2025 00:56:46 +0000 (0:00:00.350) 0:00:23.637 **********
2025-05-03 00:58:32.931196 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:58:32.931208 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:58:32.931220 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:58:32.931233 | orchestrator |
2025-05-03 00:58:32.931246 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
2025-05-03 00:58:32.931258 | orchestrator | Saturday 03 May 2025 00:56:46 +0000 (0:00:00.567) 0:00:24.204 **********
2025-05-03 00:58:32.931271 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:58:32.931283 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:58:32.931296 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:58:32.931308 | orchestrator |
2025-05-03 00:58:32.931320 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] ***
2025-05-03 00:58:32.931333 | orchestrator | Saturday 03 May 2025 00:56:47 +0000 (0:00:00.342) 0:00:24.546 **********
2025-05-03 00:58:32.931352 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-03 00:58:32.931364 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-03 00:58:32.931377 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-03 00:58:32.931390 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-03 00:58:32.931402 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:58:32.931419 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-03 00:58:32.931432 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-03 00:58:32.931445 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-03 00:58:32.931457 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:58:32.931470 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-03 00:58:32.931482 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-03 00:58:32.931495 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:58:32.931507 | orchestrator |
2025-05-03 00:58:32.931520 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] ***
2025-05-03 00:58:32.931538 | orchestrator | Saturday 03 May 2025 00:56:48 +0000 (0:00:01.099) 0:00:25.646 **********
2025-05-03 00:58:32.931551 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-03 00:58:32.931564 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-03 00:58:32.931576 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-03 00:58:32.931589 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-03 00:58:32.931601 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-03 00:58:32.931614 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:58:32.931626 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-03 00:58:32.931639 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-03 00:58:32.931649 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:58:32.931659 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-03 00:58:32.931674 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-03 00:58:32.931694 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:58:32.931711 | orchestrator |
2025-05-03 00:58:32.931721 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] *************
2025-05-03 00:58:32.931732 | orchestrator | Saturday 03 May 2025 00:56:48 +0000 (0:00:00.754) 0:00:26.400 **********
2025-05-03 00:58:32.931742 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-05-03 00:58:32.931752 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-05-03 00:58:32.931762 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-05-03 00:58:32.931772 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-05-03 00:58:32.931782 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-05-03 00:58:32.931792 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-05-03 00:58:32.931802 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-05-03 00:58:32.931812 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-05-03 00:58:32.931822 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-05-03 00:58:32.931833 | orchestrator |
2025-05-03 00:58:32.931844 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] ****
2025-05-03 00:58:32.931856 | orchestrator | Saturday 03 May 2025 00:56:51 +0000 (0:00:02.251) 0:00:28.652 **********
2025-05-03 00:58:32.931874 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-03 00:58:32.931889 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-03 00:58:32.931899 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-03 00:58:32.931909 | orchestrator | skipping: [testbed-node-3]
2025-05-03 00:58:32.932099 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-03 00:58:32.932115 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-03 00:58:32.932133 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-03 00:58:32.932143 | orchestrator | skipping: [testbed-node-4]
2025-05-03 00:58:32.932154 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-03 00:58:32.932164 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-03 00:58:32.932174 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-03 00:58:32.932184 | orchestrator | skipping: [testbed-node-5]
2025-05-03 00:58:32.932194 | orchestrator |
2025-05-03 00:58:32.932204 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] ****
2025-05-03 00:58:32.932214 | orchestrator | Saturday 03 May 2025 00:56:51 +0000 (0:00:00.640) 0:00:29.293 **********
2025-05-03 00:58:32.932224 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-03 00:58:32.932235 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-03 00:58:32.932245 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-03 00:58:32.932255 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-03 00:58:32.932265 | orchestrator | skipping:
[testbed-node-4] => (item=testbed-node-1)  2025-05-03 00:58:32.932275 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.932285 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-03 00:58:32.932295 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:58:32.932305 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-03 00:58:32.932315 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-03 00:58:32.932326 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-03 00:58:32.932336 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:58:32.932346 | orchestrator | 2025-05-03 00:58:32.932356 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-03 00:58:32.932366 | orchestrator | Saturday 03 May 2025 00:56:52 +0000 (0:00:00.630) 0:00:29.923 ********** 2025-05-03 00:58:32.932376 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-03 00:58:32.932387 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-03 00:58:32.932397 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-03 00:58:32.932407 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-03 00:58:32.932418 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-03 00:58:32.932428 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.932438 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-03 00:58:32.932448 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:58:32.932458 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': 
'192.168.16.10'})  2025-05-03 00:58:32.932475 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-03 00:58:32.932485 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-03 00:58:32.932496 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:58:32.932506 | orchestrator | 2025-05-03 00:58:32.932516 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-03 00:58:32.932527 | orchestrator | Saturday 03 May 2025 00:56:52 +0000 (0:00:00.303) 0:00:30.226 ********** 2025-05-03 00:58:32.932537 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 00:58:32.932547 | orchestrator | 2025-05-03 00:58:32.932558 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-03 00:58:32.932568 | orchestrator | Saturday 03 May 2025 00:56:53 +0000 (0:00:00.548) 0:00:30.775 ********** 2025-05-03 00:58:32.932585 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.932595 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:58:32.932605 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:58:32.932615 | orchestrator | 2025-05-03 00:58:32.932625 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-03 00:58:32.932635 | orchestrator | Saturday 03 May 2025 00:56:53 +0000 (0:00:00.281) 0:00:31.057 ********** 2025-05-03 00:58:32.932645 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.932655 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:58:32.932665 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:58:32.932675 | orchestrator | 2025-05-03 00:58:32.932685 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to 
radosgw_address_block ipv6] **** 2025-05-03 00:58:32.932695 | orchestrator | Saturday 03 May 2025 00:56:53 +0000 (0:00:00.302) 0:00:31.360 ********** 2025-05-03 00:58:32.932705 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.932715 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:58:32.932729 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:58:32.932740 | orchestrator | 2025-05-03 00:58:32.932750 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-03 00:58:32.932761 | orchestrator | Saturday 03 May 2025 00:56:54 +0000 (0:00:00.337) 0:00:31.697 ********** 2025-05-03 00:58:32.932771 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:58:32.932781 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:58:32.932792 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:58:32.932802 | orchestrator | 2025-05-03 00:58:32.932812 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-03 00:58:32.932822 | orchestrator | Saturday 03 May 2025 00:56:54 +0000 (0:00:00.661) 0:00:32.358 ********** 2025-05-03 00:58:32.932832 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-03 00:58:32.932842 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-03 00:58:32.932853 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-03 00:58:32.932863 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.932873 | orchestrator | 2025-05-03 00:58:32.932883 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-03 00:58:32.932894 | orchestrator | Saturday 03 May 2025 00:56:55 +0000 (0:00:00.361) 0:00:32.719 ********** 2025-05-03 00:58:32.932904 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-03 00:58:32.932914 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-03 00:58:32.932927 
| orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-03 00:58:32.932938 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.932948 | orchestrator | 2025-05-03 00:58:32.932958 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-03 00:58:32.932968 | orchestrator | Saturday 03 May 2025 00:56:55 +0000 (0:00:00.349) 0:00:33.069 ********** 2025-05-03 00:58:32.932978 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-03 00:58:32.932988 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-03 00:58:32.932998 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-03 00:58:32.933009 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.933019 | orchestrator | 2025-05-03 00:58:32.933029 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-03 00:58:32.933040 | orchestrator | Saturday 03 May 2025 00:56:55 +0000 (0:00:00.335) 0:00:33.404 ********** 2025-05-03 00:58:32.933050 | orchestrator | ok: [testbed-node-3] 2025-05-03 00:58:32.933060 | orchestrator | ok: [testbed-node-4] 2025-05-03 00:58:32.933087 | orchestrator | ok: [testbed-node-5] 2025-05-03 00:58:32.933099 | orchestrator | 2025-05-03 00:58:32.933109 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-03 00:58:32.933123 | orchestrator | Saturday 03 May 2025 00:56:56 +0000 (0:00:00.292) 0:00:33.696 ********** 2025-05-03 00:58:32.933142 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-03 00:58:32.933152 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-03 00:58:32.933163 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-03 00:58:32.933173 | orchestrator | 2025-05-03 00:58:32.933183 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-03 00:58:32.933194 | orchestrator | 
Saturday 03 May 2025 00:56:56 +0000 (0:00:00.431) 0:00:34.127 ********** 2025-05-03 00:58:32.933204 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.933214 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:58:32.933225 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:58:32.933235 | orchestrator | 2025-05-03 00:58:32.933245 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-03 00:58:32.933255 | orchestrator | Saturday 03 May 2025 00:56:57 +0000 (0:00:00.536) 0:00:34.664 ********** 2025-05-03 00:58:32.933265 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.933276 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:58:32.933286 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:58:32.933296 | orchestrator | 2025-05-03 00:58:32.933306 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-03 00:58:32.933322 | orchestrator | Saturday 03 May 2025 00:56:57 +0000 (0:00:00.426) 0:00:35.090 ********** 2025-05-03 00:58:32.933332 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-03 00:58:32.933343 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.933353 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-03 00:58:32.933363 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:58:32.933373 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-03 00:58:32.933383 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:58:32.933393 | orchestrator | 2025-05-03 00:58:32.933404 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-03 00:58:32.933414 | orchestrator | Saturday 03 May 2025 00:56:58 +0000 (0:00:00.496) 0:00:35.587 ********** 2025-05-03 00:58:32.933424 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  
2025-05-03 00:58:32.933435 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.933445 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-03 00:58:32.933455 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:58:32.933466 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-03 00:58:32.933476 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:58:32.933486 | orchestrator | 2025-05-03 00:58:32.933496 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-03 00:58:32.933506 | orchestrator | Saturday 03 May 2025 00:56:58 +0000 (0:00:00.362) 0:00:35.949 ********** 2025-05-03 00:58:32.933516 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-03 00:58:32.933526 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-03 00:58:32.933536 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-03 00:58:32.933546 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-03 00:58:32.933556 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.933567 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-03 00:58:32.933577 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-03 00:58:32.933587 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-03 00:58:32.933597 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:58:32.933607 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-03 00:58:32.933617 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-03 00:58:32.933627 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:58:32.933637 | orchestrator | 2025-05-03 
00:58:32.933653 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-03 00:58:32.933663 | orchestrator | Saturday 03 May 2025 00:56:59 +0000 (0:00:00.792) 0:00:36.742 ********** 2025-05-03 00:58:32.933673 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.933683 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:58:32.933693 | orchestrator | skipping: [testbed-node-5] 2025-05-03 00:58:32.933703 | orchestrator | 2025-05-03 00:58:32.933714 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-03 00:58:32.933724 | orchestrator | Saturday 03 May 2025 00:56:59 +0000 (0:00:00.266) 0:00:37.008 ********** 2025-05-03 00:58:32.933734 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-03 00:58:32.933744 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-03 00:58:32.933754 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-03 00:58:32.933764 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-03 00:58:32.933775 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-03 00:58:32.933785 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-03 00:58:32.933795 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-03 00:58:32.933805 | orchestrator | 2025-05-03 00:58:32.933815 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-03 00:58:32.933825 | orchestrator | Saturday 03 May 2025 00:57:00 +0000 (0:00:00.889) 0:00:37.898 ********** 2025-05-03 00:58:32.933835 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-03 
00:58:32.933845 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-03 00:58:32.933855 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-03 00:58:32.933865 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-03 00:58:32.933875 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-03 00:58:32.933885 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-03 00:58:32.933895 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-03 00:58:32.933905 | orchestrator | 2025-05-03 00:58:32.933916 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-05-03 00:58:32.933926 | orchestrator | Saturday 03 May 2025 00:57:02 +0000 (0:00:01.605) 0:00:39.503 ********** 2025-05-03 00:58:32.933936 | orchestrator | skipping: [testbed-node-3] 2025-05-03 00:58:32.933946 | orchestrator | skipping: [testbed-node-4] 2025-05-03 00:58:32.933956 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-05-03 00:58:32.933966 | orchestrator | 2025-05-03 00:58:32.933977 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-05-03 00:58:32.933995 | orchestrator | Saturday 03 May 2025 00:57:02 +0000 (0:00:00.471) 0:00:39.975 ********** 2025-05-03 00:58:32.934008 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-03 00:58:32.934044 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 
'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-03 00:58:32.934056 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-03 00:58:32.934189 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-03 00:58:32.934220 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-03 00:58:32.934231 | orchestrator | 2025-05-03 00:58:32.934242 | orchestrator | TASK [generate keys] *********************************************************** 2025-05-03 00:58:32.934250 | orchestrator | Saturday 03 May 2025 00:57:43 +0000 (0:00:40.448) 0:01:20.423 ********** 2025-05-03 00:58:32.934259 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-03 00:58:32.934268 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-03 00:58:32.934276 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-03 00:58:32.934285 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-03 00:58:32.934294 | orchestrator | changed: 
[testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-03 00:58:32.934302 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-03 00:58:32.934311 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-05-03 00:58:32.934320 | orchestrator | 2025-05-03 00:58:32.934328 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-05-03 00:58:32.934337 | orchestrator | Saturday 03 May 2025 00:58:03 +0000 (0:00:20.134) 0:01:40.558 ********** 2025-05-03 00:58:32.934345 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-03 00:58:32.934354 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-03 00:58:32.934362 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-03 00:58:32.934371 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-03 00:58:32.934380 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-03 00:58:32.934389 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-03 00:58:32.934397 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-03 00:58:32.934406 | orchestrator | 2025-05-03 00:58:32.934415 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-05-03 00:58:32.934423 | orchestrator | Saturday 03 May 2025 00:58:12 +0000 (0:00:09.770) 0:01:50.328 ********** 2025-05-03 00:58:32.934432 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-03 00:58:32.934441 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-03 00:58:32.934449 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 
2025-05-03 00:58:32.934458 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-03 00:58:32.934466 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-03 00:58:32.934475 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-03 00:58:32.934483 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-03 00:58:32.934492 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-03 00:58:32.934501 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-03 00:58:32.934517 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-03 00:58:32.934526 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-03 00:58:32.934542 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-03 00:58:35.991592 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-03 00:58:35.991740 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-03 00:58:35.991762 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-03 00:58:35.991778 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-03 00:58:35.991794 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-03 00:58:35.991809 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-03 00:58:35.991825 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-05-03 00:58:35.991840 | orchestrator | 2025-05-03 00:58:35.991856 | orchestrator | PLAY RECAP 
********************************************************************* 2025-05-03 00:58:35.991873 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-05-03 00:58:35.991889 | orchestrator | testbed-node-4 : ok=20  changed=0 unreachable=0 failed=0 skipped=30  rescued=0 ignored=0 2025-05-03 00:58:35.991905 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0 2025-05-03 00:58:35.991920 | orchestrator | 2025-05-03 00:58:35.991935 | orchestrator | 2025-05-03 00:58:35.991949 | orchestrator | 2025-05-03 00:58:35.991964 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-03 00:58:35.991979 | orchestrator | Saturday 03 May 2025 00:58:31 +0000 (0:00:18.161) 0:02:08.489 ********** 2025-05-03 00:58:35.991994 | orchestrator | =============================================================================== 2025-05-03 00:58:35.992009 | orchestrator | create openstack pool(s) ----------------------------------------------- 40.45s 2025-05-03 00:58:35.992023 | orchestrator | generate keys ---------------------------------------------------------- 20.13s 2025-05-03 00:58:35.992038 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.16s 2025-05-03 00:58:35.992053 | orchestrator | get keys from monitors -------------------------------------------------- 9.77s 2025-05-03 00:58:35.992100 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.35s 2025-05-03 00:58:35.992117 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 2.25s 2025-05-03 00:58:35.992134 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.61s 2025-05-03 00:58:35.992150 | orchestrator | ceph-facts : get ceph current status ------------------------------------ 1.47s 2025-05-03 00:58:35.992166 | 
orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.33s 2025-05-03 00:58:35.992183 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 1.10s 2025-05-03 00:58:35.992199 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.93s 2025-05-03 00:58:35.992222 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 0.89s 2025-05-03 00:58:35.992375 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.89s 2025-05-03 00:58:35.992398 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.84s 2025-05-03 00:58:35.992414 | orchestrator | ceph-facts : set_fact rgw_instances_all --------------------------------- 0.79s 2025-05-03 00:58:35.992430 | orchestrator | ceph-facts : include facts.yml ------------------------------------------ 0.78s 2025-05-03 00:58:35.992446 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.77s 2025-05-03 00:58:35.992485 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.75s 2025-05-03 00:58:35.992499 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.73s 2025-05-03 00:58:35.992513 | orchestrator | ceph-facts : set_fact _radosgw_address to radosgw_address --------------- 0.66s 2025-05-03 00:58:35.992527 | orchestrator | 2025-05-03 00:58:32 | INFO  | Task 92ea823a-db0f-4096-8a0e-5a3a449437b3 is in state STARTED 2025-05-03 00:58:35.992542 | orchestrator | 2025-05-03 00:58:32 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:58:35.992556 | orchestrator | 2025-05-03 00:58:32 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:58:35.992588 | orchestrator | 2025-05-03 00:58:35 | INFO  | Task ed90f1ce-3170-4034-9f0e-e50188a0a27b is in state STARTED 2025-05-03 
00:58:35.993437 | orchestrator | 2025-05-03 00:58:35 | INFO  | Task 92ea823a-db0f-4096-8a0e-5a3a449437b3 is in state STARTED 2025-05-03 00:58:35.993469 | orchestrator | 2025-05-03 00:58:35 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:58:39.048437 | orchestrator | 2025-05-03 00:58:35 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:58:39.048616 | orchestrator | 2025-05-03 00:58:39 | INFO  | Task ed90f1ce-3170-4034-9f0e-e50188a0a27b is in state STARTED 2025-05-03 00:58:39.049196 | orchestrator | 2025-05-03 00:58:39 | INFO  | Task 92ea823a-db0f-4096-8a0e-5a3a449437b3 is in state STARTED 2025-05-03 00:58:39.059754 | orchestrator | 2025-05-03 00:58:39 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:58:42.126211 | orchestrator | 2025-05-03 00:58:39 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:58:42.126361 | orchestrator | 2025-05-03 00:58:42 | INFO  | Task ed90f1ce-3170-4034-9f0e-e50188a0a27b is in state STARTED 2025-05-03 00:58:42.128898 | orchestrator | 2025-05-03 00:58:42 | INFO  | Task 92ea823a-db0f-4096-8a0e-5a3a449437b3 is in state STARTED 2025-05-03 00:58:42.130728 | orchestrator | 2025-05-03 00:58:42 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:58:42.132661 | orchestrator | 2025-05-03 00:58:42 | INFO  | Task 22a029ed-28b3-4877-9bd6-ad9b75561747 is in state STARTED 2025-05-03 00:58:45.190956 | orchestrator | 2025-05-03 00:58:42 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:58:45.191176 | orchestrator | 2025-05-03 00:58:45 | INFO  | Task ed90f1ce-3170-4034-9f0e-e50188a0a27b is in state STARTED 2025-05-03 00:58:45.192249 | orchestrator | 2025-05-03 00:58:45 | INFO  | Task 92ea823a-db0f-4096-8a0e-5a3a449437b3 is in state STARTED 2025-05-03 00:58:45.193523 | orchestrator | 2025-05-03 00:58:45 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:58:45.194914 | orchestrator 
| 2025-05-03 00:58:45 | INFO  | Task 22a029ed-28b3-4877-9bd6-ad9b75561747 is in state STARTED
2025-05-03 00:58:48.256901 | orchestrator | 2025-05-03 00:58:45 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:58:48.257041 | orchestrator | 2025-05-03 00:58:48 | INFO  | Task ed90f1ce-3170-4034-9f0e-e50188a0a27b is in state STARTED
2025-05-03 00:58:48.258394 | orchestrator | 2025-05-03 00:58:48 | INFO  | Task 92ea823a-db0f-4096-8a0e-5a3a449437b3 is in state STARTED
2025-05-03 00:58:48.259733 | orchestrator | 2025-05-03 00:58:48 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:58:48.260305 | orchestrator | 2025-05-03 00:58:48 | INFO  | Task 22a029ed-28b3-4877-9bd6-ad9b75561747 is in state STARTED
2025-05-03 00:58:51.342488 | orchestrator | 2025-05-03 00:58:48 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:58:51.342598 | orchestrator | 2025-05-03 00:58:51 | INFO  | Task ed90f1ce-3170-4034-9f0e-e50188a0a27b is in state STARTED
2025-05-03 00:58:51.343809 | orchestrator | 2025-05-03 00:58:51 | INFO  | Task 92ea823a-db0f-4096-8a0e-5a3a449437b3 is in state STARTED
2025-05-03 00:58:51.345098 | orchestrator | 2025-05-03 00:58:51 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:58:51.346711 | orchestrator | 2025-05-03 00:58:51 | INFO  | Task 22a029ed-28b3-4877-9bd6-ad9b75561747 is in state STARTED
2025-05-03 00:58:51.346997 | orchestrator | 2025-05-03 00:58:51 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:58:54.413185 | orchestrator | 2025-05-03 00:58:54 | INFO  | Task ed90f1ce-3170-4034-9f0e-e50188a0a27b is in state STARTED
2025-05-03 00:58:54.415155 | orchestrator | 2025-05-03 00:58:54 | INFO  | Task 92ea823a-db0f-4096-8a0e-5a3a449437b3 is in state STARTED
2025-05-03 00:58:54.420884 | orchestrator | 2025-05-03 00:58:54 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:58:54.421403 | orchestrator | 2025-05-03 00:58:54 | INFO  | Task 22a029ed-28b3-4877-9bd6-ad9b75561747 is in state STARTED
2025-05-03 00:58:57.470856 | orchestrator | 2025-05-03 00:58:54 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:58:57.471081 | orchestrator | 2025-05-03 00:58:57 | INFO  | Task ed90f1ce-3170-4034-9f0e-e50188a0a27b is in state STARTED
2025-05-03 00:58:57.471697 | orchestrator | 2025-05-03 00:58:57 | INFO  | Task 92ea823a-db0f-4096-8a0e-5a3a449437b3 is in state STARTED
2025-05-03 00:58:57.473186 | orchestrator | 2025-05-03 00:58:57 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:58:57.474158 | orchestrator | 2025-05-03 00:58:57 | INFO  | Task 22a029ed-28b3-4877-9bd6-ad9b75561747 is in state STARTED
2025-05-03 00:59:00.533308 | orchestrator | 2025-05-03 00:58:57 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:59:00.533450 | orchestrator | 2025-05-03 00:59:00 | INFO  | Task ed90f1ce-3170-4034-9f0e-e50188a0a27b is in state STARTED
2025-05-03 00:59:00.534999 | orchestrator | 2025-05-03 00:59:00 | INFO  | Task 92ea823a-db0f-4096-8a0e-5a3a449437b3 is in state STARTED
2025-05-03 00:59:00.536551 | orchestrator | 2025-05-03 00:59:00 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:59:00.539106 | orchestrator | 2025-05-03 00:59:00 | INFO  | Task 22a029ed-28b3-4877-9bd6-ad9b75561747 is in state STARTED
2025-05-03 00:59:03.607864 | orchestrator | 2025-05-03 00:59:00 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:59:03.607992 | orchestrator | 2025-05-03 00:59:03 | INFO  | Task ed90f1ce-3170-4034-9f0e-e50188a0a27b is in state STARTED
2025-05-03 00:59:03.609866 | orchestrator | 2025-05-03 00:59:03 | INFO  | Task 92ea823a-db0f-4096-8a0e-5a3a449437b3 is in state STARTED
2025-05-03 00:59:03.611708 | orchestrator | 2025-05-03 00:59:03 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:59:03.613884 | orchestrator | 2025-05-03 00:59:03 | INFO  | Task 22a029ed-28b3-4877-9bd6-ad9b75561747 is in state STARTED
2025-05-03 00:59:06.665476 | orchestrator | 2025-05-03 00:59:03 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:59:06.665627 | orchestrator | 2025-05-03 00:59:06 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED
2025-05-03 00:59:06.666558 | orchestrator | 2025-05-03 00:59:06 | INFO  | Task ed90f1ce-3170-4034-9f0e-e50188a0a27b is in state STARTED
2025-05-03 00:59:06.666602 | orchestrator | 2025-05-03 00:59:06 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED
2025-05-03 00:59:06.668391 | orchestrator | 2025-05-03 00:59:06 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED
2025-05-03 00:59:06.668489 | orchestrator | 2025-05-03 00:59:06 | INFO  | Task 92ea823a-db0f-4096-8a0e-5a3a449437b3 is in state SUCCESS
2025-05-03 00:59:06.672893 | orchestrator | 
2025-05-03 00:59:06.672950 | orchestrator | 
2025-05-03 00:59:06.672966 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-03 00:59:06.672981 | orchestrator | 
2025-05-03 00:59:06.672996 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-03 00:59:06.673010 | orchestrator | Saturday 03 May 2025 00:56:33 +0000 (0:00:00.306) 0:00:00.306 **********
2025-05-03 00:59:06.673056 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:59:06.673075 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:59:06.673089 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:59:06.673103 | orchestrator | 
2025-05-03 00:59:06.673131 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-03 00:59:06.673146 | orchestrator | Saturday 03 May 2025 00:56:34 +0000 (0:00:00.418) 0:00:00.725 **********
2025-05-03 00:59:06.673160 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-05-03 00:59:06.673174 | orchestrator | ok: [testbed-node-1] => 
(item=enable_keystone_True)
2025-05-03 00:59:06.673187 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-05-03 00:59:06.673211 | orchestrator | 
2025-05-03 00:59:06.673235 | orchestrator | PLAY [Apply role keystone] *****************************************************
2025-05-03 00:59:06.673258 | orchestrator | 
2025-05-03 00:59:06.673281 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-03 00:59:06.673305 | orchestrator | Saturday 03 May 2025 00:56:34 +0000 (0:00:00.348) 0:00:01.074 **********
2025-05-03 00:59:06.673328 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:59:06.674408 | orchestrator | 
2025-05-03 00:59:06.674462 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2025-05-03 00:59:06.674480 | orchestrator | Saturday 03 May 2025 00:56:35 +0000 (0:00:00.885) 0:00:01.959 **********
2025-05-03 00:59:06.674499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-03 00:59:06.674520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-03 00:59:06.674686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-03 00:59:06.674711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-03 00:59:06.674728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-03 00:59:06.674743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-03 00:59:06.674758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-03 00:59:06.674775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-03 00:59:06.674800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-03 00:59:06.674814 | orchestrator | 
2025-05-03 00:59:06.674829 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2025-05-03 00:59:06.674851 | orchestrator | Saturday 03 May 2025 00:56:37 +0000 (0:00:02.328) 0:00:04.288 **********
2025-05-03 00:59:06.674866 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml)
2025-05-03 00:59:06.674881 | orchestrator | 
2025-05-03 00:59:06.674895 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2025-05-03 00:59:06.674909 | orchestrator | Saturday 03 May 2025 00:56:38 +0000 (0:00:00.580) 0:00:04.869 **********
2025-05-03 00:59:06.674923 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:59:06.674938 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:59:06.674952 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:59:06.674966 | orchestrator | 
2025-05-03 00:59:06.674980 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2025-05-03 00:59:06.674994 | orchestrator | Saturday 03 May 2025 00:56:38 +0000 (0:00:00.506) 0:00:05.375 **********
2025-05-03 00:59:06.675007 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-03 00:59:06.675071 | orchestrator | 
2025-05-03 00:59:06.675087 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-03 00:59:06.675101 | orchestrator | Saturday 03 May 2025 00:56:39 +0000 (0:00:00.414) 0:00:05.789 **********
2025-05-03 00:59:06.675115 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:59:06.675129 | orchestrator | 
2025-05-03 00:59:06.675143 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2025-05-03 00:59:06.675157 | orchestrator | Saturday 03 May 2025 00:56:39 +0000 (0:00:00.718) 0:00:06.507 **********
2025-05-03 00:59:06.675171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-03 00:59:06.675187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-03 00:59:06.675223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-03 00:59:06.675239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-03 00:59:06.675255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-03 00:59:06.675269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-03 00:59:06.675284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-03 00:59:06.675305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-03 00:59:06.675320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-03 00:59:06.675335 | orchestrator | 
2025-05-03 00:59:06.675349 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2025-05-03 00:59:06.675364 | orchestrator | Saturday 03 May 2025 00:56:43 +0000 (0:00:03.406) 0:00:09.914 **********
2025-05-03 00:59:06.675386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-03 00:59:06.675402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-03 00:59:06.675417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-03 00:59:06.675438 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:59:06.675453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-03 00:59:06.675469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-03 00:59:06.675493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-03 00:59:06.675508 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:59:06.675523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-03 00:59:06.675539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-03 00:59:06.675560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-03 00:59:06.675574 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:59:06.675589 | orchestrator | 
2025-05-03 00:59:06.675603 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2025-05-03 00:59:06.675617 | orchestrator | Saturday 03 May 2025 00:56:44 +0000 (0:00:01.119) 0:00:11.033 **********
2025-05-03 00:59:06.675632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-03 00:59:06.675655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-03 00:59:06.675697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-03 00:59:06.675713 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:59:06.675727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-03 00:59:06.675751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-03 00:59:06.675766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-03 00:59:06.675780 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:59:06.675802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-03 00:59:06.675818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-03 00:59:06.675833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-03 00:59:06.675856 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:59:06.675870 | orchestrator | 
2025-05-03 00:59:06.675885 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2025-05-03 00:59:06.675899 | orchestrator | Saturday 03 May 2025 00:56:45 +0000 (0:00:01.316) 0:00:12.350 **********
2025-05-03 00:59:06.675914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-03 00:59:06.675929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-03 00:59:06.675952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-03 00:59:06.675969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-03 00:59:06.675990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-03 00:59:06.676005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-03 00:59:06.676048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-03 00:59:06.676064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-03 00:59:06.676086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-03 00:59:06.676101 | orchestrator | 2025-05-03 00:59:06.676116 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-05-03 00:59:06.676130 | orchestrator | Saturday 03 May 2025 00:56:49 +0000 (0:00:03.620) 0:00:15.970 ********** 2025-05-03 00:59:06.676152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-03 00:59:06.676168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-03 00:59:06.676183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-03 00:59:06.676198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-03 00:59:06.676220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-03 00:59:06.676243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-03 00:59:06.676259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-03 00:59:06.676274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-03 00:59:06.676288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-03 00:59:06.676303 | orchestrator | 2025-05-03 00:59:06.676317 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-05-03 
00:59:06.676331 | orchestrator | Saturday 03 May 2025 00:56:57 +0000 (0:00:07.771) 0:00:23.742 ********** 2025-05-03 00:59:06.676345 | orchestrator | changed: [testbed-node-1] 2025-05-03 00:59:06.676360 | orchestrator | changed: [testbed-node-0] 2025-05-03 00:59:06.676374 | orchestrator | changed: [testbed-node-2] 2025-05-03 00:59:06.676388 | orchestrator | 2025-05-03 00:59:06.676402 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-05-03 00:59:06.676416 | orchestrator | Saturday 03 May 2025 00:56:59 +0000 (0:00:02.369) 0:00:26.111 ********** 2025-05-03 00:59:06.676430 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:59:06.676444 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:59:06.676459 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:59:06.676488 | orchestrator | 2025-05-03 00:59:06.676509 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-05-03 00:59:06.676524 | orchestrator | Saturday 03 May 2025 00:57:00 +0000 (0:00:01.045) 0:00:27.157 ********** 2025-05-03 00:59:06.676545 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:59:06.676561 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:59:06.676575 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:59:06.676590 | orchestrator | 2025-05-03 00:59:06.676609 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-05-03 00:59:06.676623 | orchestrator | Saturday 03 May 2025 00:57:00 +0000 (0:00:00.350) 0:00:27.508 ********** 2025-05-03 00:59:06.676637 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:59:06.676651 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:59:06.676665 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:59:06.676679 | orchestrator | 2025-05-03 00:59:06.676692 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-05-03 
00:59:06.676706 | orchestrator | Saturday 03 May 2025 00:57:01 +0000 (0:00:00.343) 0:00:27.851 ********** 2025-05-03 00:59:06.676721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-03 00:59:06.676737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-03 00:59:06.676752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-03 00:59:06.676768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-03 00:59:06.676803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-03 00:59:06.676819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-03 00:59:06.676834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-03 00:59:06.676849 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-03 00:59:06.676863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-03 00:59:06.676878 | orchestrator | 2025-05-03 00:59:06.676892 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-03 00:59:06.676912 | orchestrator | Saturday 03 May 2025 00:57:03 +0000 (0:00:02.351) 0:00:30.203 ********** 2025-05-03 00:59:06.676927 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:59:06.676941 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:59:06.676955 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:59:06.676969 | orchestrator | 2025-05-03 00:59:06.676983 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-05-03 00:59:06.676997 | 
orchestrator | Saturday 03 May 2025 00:57:03 +0000 (0:00:00.222) 0:00:30.425 ********** 2025-05-03 00:59:06.677011 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-03 00:59:06.677054 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-03 00:59:06.677091 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-03 00:59:06.677118 | orchestrator | 2025-05-03 00:59:06.677138 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-05-03 00:59:06.677152 | orchestrator | Saturday 03 May 2025 00:57:05 +0000 (0:00:01.909) 0:00:32.335 ********** 2025-05-03 00:59:06.677166 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-03 00:59:06.677179 | orchestrator | 2025-05-03 00:59:06.677193 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-05-03 00:59:06.677207 | orchestrator | Saturday 03 May 2025 00:57:06 +0000 (0:00:00.603) 0:00:32.938 ********** 2025-05-03 00:59:06.677221 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:59:06.677235 | orchestrator | skipping: [testbed-node-1] 2025-05-03 00:59:06.677249 | orchestrator | skipping: [testbed-node-2] 2025-05-03 00:59:06.677262 | orchestrator | 2025-05-03 00:59:06.677276 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-05-03 00:59:06.677290 | orchestrator | Saturday 03 May 2025 00:57:07 +0000 (0:00:00.986) 0:00:33.925 ********** 2025-05-03 00:59:06.677304 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-03 00:59:06.677318 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-03 00:59:06.677332 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-03 00:59:06.677346 | orchestrator | 2025-05-03 00:59:06.677360 | orchestrator | TASK [keystone : Set fact with 
the generated cron jobs for building the crontab later] ***
2025-05-03 00:59:06.677373 | orchestrator | Saturday 03 May 2025 00:57:08 +0000 (0:00:01.219) 0:00:35.144 **********
2025-05-03 00:59:06.677387 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:59:06.677401 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:59:06.677415 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:59:06.677429 | orchestrator |
2025-05-03 00:59:06.677442 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2025-05-03 00:59:06.677456 | orchestrator | Saturday 03 May 2025 00:57:08 +0000 (0:00:00.312) 0:00:35.457 **********
2025-05-03 00:59:06.677470 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-05-03 00:59:06.677484 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-05-03 00:59:06.677498 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-05-03 00:59:06.677512 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-05-03 00:59:06.677526 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-05-03 00:59:06.677540 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-05-03 00:59:06.677555 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-05-03 00:59:06.677569 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-05-03 00:59:06.677583 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-05-03 00:59:06.677605 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-05-03 00:59:06.677619 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-05-03 00:59:06.677633 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-05-03 00:59:06.677647 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-05-03 00:59:06.677661 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-05-03 00:59:06.677675 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-05-03 00:59:06.677689 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-03 00:59:06.677703 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-03 00:59:06.677717 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-03 00:59:06.677731 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-03 00:59:06.677744 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-03 00:59:06.677758 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-03 00:59:06.677772 | orchestrator |
2025-05-03 00:59:06.677786 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2025-05-03 00:59:06.677799 | orchestrator | Saturday 03 May 2025 00:57:19 +0000 (0:00:10.489) 0:00:45.946 **********
2025-05-03 00:59:06.677813 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-03 00:59:06.677827 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-03 00:59:06.677841 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-03 00:59:06.677855 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-03 00:59:06.677868 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-03 00:59:06.677889 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-03 00:59:06.677903 | orchestrator |
2025-05-03 00:59:06.677923 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2025-05-03 00:59:06.677938 | orchestrator | Saturday 03 May 2025 00:57:22 +0000 (0:00:03.359) 0:00:49.306 **********
2025-05-03 00:59:06.677952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-03 00:59:06.677968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-03 00:59:06.677991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-03 00:59:06.678007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-03 00:59:06.678109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-03 00:59:06.678126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-03 00:59:06.678141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-03 00:59:06.678165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-03 00:59:06.678180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-03 00:59:06.678195 | orchestrator |
2025-05-03 00:59:06.678209 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-03 00:59:06.678224 | orchestrator | Saturday 03 May 2025 00:57:25 +0000 (0:00:00.321) 0:00:52.068 **********
2025-05-03 00:59:06.678238 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:59:06.678252 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:59:06.678267 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:59:06.678281 | orchestrator |
2025-05-03 00:59:06.678295 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2025-05-03 00:59:06.678309 | orchestrator | Saturday 03 May 2025 00:57:25 +0000 (0:00:02.421) 0:00:52.390 **********
2025-05-03 00:59:06.678323 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:59:06.678337 | orchestrator |
2025-05-03 00:59:06.678351 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2025-05-03 00:59:06.678365 | orchestrator | Saturday 03 May 2025 00:57:28 +0000 (0:00:02.227) 0:00:54.811 **********
2025-05-03 00:59:06.678379 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:59:06.678393 | orchestrator |
2025-05-03 00:59:06.678407 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2025-05-03 00:59:06.678421 | orchestrator | Saturday 03 May 2025 00:57:30 +0000 (0:00:00.944) 0:00:57.039 **********
2025-05-03 00:59:06.678435 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:59:06.678449 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:59:06.678463 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:59:06.678477 | orchestrator |
2025-05-03 00:59:06.678491 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2025-05-03 00:59:06.678505 | orchestrator | Saturday 03 May 2025 00:57:31 +0000 (0:00:00.362) 0:00:57.983 **********
2025-05-03 00:59:06.678519 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:59:06.678539 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:59:06.678554 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:59:06.678567 | orchestrator |
2025-05-03 00:59:06.678582 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2025-05-03 00:59:06.678597 | orchestrator | Saturday 03 May 2025 00:57:31 +0000 (0:00:00.574) 0:00:58.346 **********
2025-05-03 00:59:06.678611 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:59:06.678625 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:59:06.678640 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:59:06.678661 | orchestrator |
2025-05-03 00:59:06.678676 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2025-05-03 00:59:06.678690 | orchestrator | Saturday 03 May 2025 00:57:32 +0000 (0:00:13.120) 0:00:58.920 **********
2025-05-03 00:59:06.678704 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:59:06.678718 | orchestrator |
2025-05-03 00:59:06.678731 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2025-05-03 00:59:06.678745 | orchestrator | Saturday 03 May 2025 00:57:45 +0000 (0:00:08.934) 0:01:12.041 **********
2025-05-03 00:59:06.678759 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:59:06.678773 | orchestrator |
2025-05-03 00:59:06.678787 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-05-03 00:59:06.678801 | orchestrator | Saturday 03 May 2025 00:57:54 +0000 (0:00:00.056) 0:01:20.976 **********
2025-05-03 00:59:06.678815 | orchestrator |
2025-05-03 00:59:06.678829 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-05-03 00:59:06.678843 | orchestrator | Saturday 03 May 2025 00:57:54 +0000 (0:00:00.053) 0:01:21.032 **********
2025-05-03 00:59:06.678857 | orchestrator |
2025-05-03 00:59:06.678871 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-05-03 00:59:06.678885 | orchestrator | Saturday 03 May 2025 00:57:54 +0000 (0:00:00.055) 0:01:21.086 **********
2025-05-03 00:59:06.678899 | orchestrator |
2025-05-03 00:59:06.678913 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2025-05-03 00:59:06.678927 | orchestrator | Saturday 03 May 2025 00:57:54 +0000 (0:00:00.055) 0:01:21.141 **********
2025-05-03 00:59:06.678941 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:59:06.678954 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:59:06.678969 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:59:06.678982 | orchestrator |
2025-05-03 00:59:06.678997 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-05-03 00:59:06.679011 | orchestrator | Saturday 03 May 2025 00:58:04 +0000 (0:00:09.512) 0:01:30.653 **********
2025-05-03 00:59:06.679050 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:59:06.679065 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:59:06.679079 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:59:06.679093 | orchestrator |
2025-05-03 00:59:06.679107 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2025-05-03 00:59:06.679121 | orchestrator | Saturday 03 May 2025 00:58:11 +0000 (0:00:07.649) 0:01:38.303 **********
2025-05-03 00:59:06.679135 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:59:06.679149 | orchestrator | changed: [testbed-node-2]
2025-05-03 00:59:06.679163 | orchestrator | changed: [testbed-node-1]
2025-05-03 00:59:06.679177 | orchestrator |
2025-05-03 00:59:06.679191 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-03 00:59:06.679205 | orchestrator | Saturday 03 May 2025 00:58:22 +0000 (0:00:10.605) 0:01:48.909 **********
2025-05-03 00:59:06.679219 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 00:59:06.679233 | orchestrator |
2025-05-03 00:59:06.679253 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-05-03 00:59:06.679267 | orchestrator | Saturday 03 May 2025 00:58:23 +0000 (0:00:00.744) 0:01:49.653 **********
2025-05-03 00:59:06.679281 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:59:06.679295 | orchestrator | ok: [testbed-node-1]
2025-05-03 00:59:06.679310 | orchestrator | ok: [testbed-node-2]
2025-05-03 00:59:06.679323 | orchestrator |
2025-05-03 00:59:06.679338 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-05-03 00:59:06.679351 | orchestrator | Saturday 03 May 2025 00:58:24 +0000 (0:00:01.019) 0:01:50.672 **********
2025-05-03 00:59:06.679366 | orchestrator | changed: [testbed-node-0]
2025-05-03 00:59:06.679379 | orchestrator |
2025-05-03 00:59:06.679393 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-05-03 00:59:06.679407 | orchestrator | Saturday 03 May 2025 00:58:25 +0000 (0:00:01.518) 0:01:52.190 **********
2025-05-03 00:59:06.679428 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2025-05-03 00:59:06.679443 | orchestrator |
2025-05-03 00:59:06.679457 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-05-03 00:59:06.679471 | orchestrator | Saturday 03 May 2025 00:58:34 +0000 (0:00:08.884) 0:02:01.075 **********
2025-05-03 00:59:06.679485 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2025-05-03 00:59:06.679499 | orchestrator |
2025-05-03 00:59:06.679513 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-05-03 00:59:06.679528 | orchestrator | Saturday 03 May 2025 00:58:53 +0000 (0:00:19.296) 0:02:20.371 **********
2025-05-03 00:59:06.679542 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-05-03 00:59:06.679556 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2025-05-03 00:59:06.679570 | orchestrator |
2025-05-03 00:59:06.679585 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2025-05-03 00:59:06.679599 | orchestrator | Saturday 03 May 2025 00:59:00 +0000 (0:00:07.074) 0:02:27.446 **********
2025-05-03 00:59:06.679612 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:59:06.679626 | orchestrator |
2025-05-03 00:59:06.679640 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2025-05-03 00:59:06.679655 | orchestrator | Saturday 03 May 2025 00:59:00 +0000 (0:00:00.130) 0:02:27.577 **********
2025-05-03 00:59:06.679669 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:59:06.679689 | orchestrator |
2025-05-03 00:59:06.679703 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2025-05-03 00:59:06.679724 | orchestrator | Saturday 03 May 2025 00:59:01 +0000 (0:00:00.119) 0:02:27.696 **********
2025-05-03 00:59:09.715117 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:59:09.715228 | orchestrator |
2025-05-03 00:59:09.715250 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2025-05-03 00:59:09.715266 | orchestrator | Saturday 03 May 2025 00:59:01 +0000 (0:00:00.132) 0:02:27.828 **********
2025-05-03 00:59:09.715281 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:59:09.715295 | orchestrator |
2025-05-03 00:59:09.715309 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2025-05-03 00:59:09.715323 | orchestrator | Saturday 03 May 2025 00:59:01 +0000 (0:00:00.425) 0:02:28.253 **********
2025-05-03 00:59:09.715338 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:59:09.715352 | orchestrator |
2025-05-03 00:59:09.715366 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-03 00:59:09.715381 | orchestrator | Saturday 03 May 2025 00:59:04 +0000 (0:00:03.219) 0:02:31.473 **********
2025-05-03 00:59:09.715395 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:59:09.715408 | orchestrator | skipping: [testbed-node-1]
2025-05-03 00:59:09.715422 | orchestrator | skipping: [testbed-node-2]
2025-05-03 00:59:09.715436 | orchestrator |
2025-05-03 00:59:09.715450 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 00:59:09.715465 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-03 00:59:09.715480 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-05-03 00:59:09.715495 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-05-03 00:59:09.715509 | orchestrator |
2025-05-03 00:59:09.715523 | orchestrator |
2025-05-03 00:59:09.715537 | orchestrator | TASKS RECAP ********************************************************************
2025-05-03 00:59:09.715551 | orchestrator | Saturday 03 May 2025 00:59:05 +0000 (0:00:00.509) 0:02:31.982 **********
2025-05-03 00:59:09.715565 | orchestrator | ===============================================================================
2025-05-03 00:59:09.715605 | orchestrator | service-ks-register : keystone | Creating services --------------------- 19.30s
2025-05-03 00:59:09.715622 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.12s
2025-05-03 00:59:09.715638 | orchestrator | keystone : Restart keystone container ---------------------------------- 10.61s
2025-05-03 00:59:09.715655 | orchestrator | keystone : Copying files for keystone-fernet --------------------------- 10.49s
2025-05-03 00:59:09.715673 | orchestrator | keystone : Restart keystone-ssh container ------------------------------- 9.51s
2025-05-03 00:59:09.715690 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 8.93s
2025-05-03 00:59:09.715720 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 8.88s
2025-05-03 00:59:09.715737 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 7.77s
2025-05-03 00:59:09.715753 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 7.65s
2025-05-03 00:59:09.715769 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.07s
2025-05-03 00:59:09.715784 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.62s
2025-05-03 00:59:09.715800 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.41s
2025-05-03 00:59:09.715815 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.36s
2025-05-03 00:59:09.715831 | orchestrator | keystone : Creating default user role ----------------------------------- 3.22s
2025-05-03 00:59:09.715847 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.76s
2025-05-03 00:59:09.715862 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.42s
2025-05-03 00:59:09.715878 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 2.37s
2025-05-03 00:59:09.715894 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.35s
2025-05-03 00:59:09.715910 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.33s
2025-05-03 00:59:09.715926 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.23s
2025-05-03 00:59:09.715943 | orchestrator | 2025-05-03 00:59:06 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED
2025-05-03 00:59:09.715957 | orchestrator | 2025-05-03 00:59:06 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:59:09.715971 | orchestrator | 2025-05-03 00:59:06 | INFO  | Task 22a029ed-28b3-4877-9bd6-ad9b75561747 is in state STARTED
2025-05-03 00:59:09.715985 | orchestrator | 2025-05-03 00:59:06 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:59:09.716042 | orchestrator | 2025-05-03 00:59:09 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED
2025-05-03 00:59:09.716207 | orchestrator | 2025-05-03 00:59:09 | INFO  | Task ed90f1ce-3170-4034-9f0e-e50188a0a27b is in state STARTED
2025-05-03 00:59:09.716318 | orchestrator | 2025-05-03 00:59:09 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED
2025-05-03 00:59:09.719323 | orchestrator | 2025-05-03 00:59:09 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED
2025-05-03 00:59:09.719809 | orchestrator | 2025-05-03 00:59:09 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED
2025-05-03 00:59:09.720444 | orchestrator | 2025-05-03 00:59:09 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:59:09.721198 | orchestrator | 2025-05-03 00:59:09 | INFO  | Task 22a029ed-28b3-4877-9bd6-ad9b75561747 is in state STARTED
2025-05-03 00:59:12.772424 | orchestrator | 2025-05-03 00:59:09 | INFO  | Wait 1 second(s) until the next check
2025-05-03 00:59:12.772501 | orchestrator | 2025-05-03 00:59:12 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED
2025-05-03 00:59:12.773901 | orchestrator | 2025-05-03 00:59:12 | INFO  | Task ed90f1ce-3170-4034-9f0e-e50188a0a27b is in state STARTED
2025-05-03 00:59:12.773934 | orchestrator | 2025-05-03 00:59:12 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED
2025-05-03 00:59:12.776380 | orchestrator | 2025-05-03 00:59:12 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED
2025-05-03 00:59:12.779060 | orchestrator | 2025-05-03 00:59:12 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED
2025-05-03 00:59:12.781053 | orchestrator | 2025-05-03 00:59:12 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 00:59:12.783727 | orchestrator | 2025-05-03 00:59:12 | INFO  | Task 22a029ed-28b3-4877-9bd6-ad9b75561747 is in state SUCCESS
2025-05-03 00:59:12.785077 | orchestrator |
2025-05-03 00:59:12.785124 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-05-03 00:59:12.785151 | orchestrator |
2025-05-03 00:59:12.785173 | orchestrator | PLAY [Apply role fetch-keys] ***************************************************
2025-05-03 00:59:12.785242 | orchestrator |
2025-05-03 00:59:12.785259 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ********
2025-05-03 00:59:12.785273 | orchestrator | Saturday 03 May 2025 00:58:43 +0000 (0:00:00.460) 0:00:00.460 **********
2025-05-03 00:59:12.785288 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0
2025-05-03 00:59:12.785303 | orchestrator |
2025-05-03 00:59:12.785317 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] *****************
2025-05-03 00:59:12.785331 | orchestrator | Saturday 03 May 2025 00:58:43 +0000 (0:00:00.205) 0:00:00.666 **********
2025-05-03 00:59:12.785345 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2025-05-03 00:59:12.785454 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1)
2025-05-03 00:59:12.785507 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2)
2025-05-03 00:59:12.785523 | orchestrator |
2025-05-03 00:59:12.785537 | orchestrator | TASK [ceph-facts : include facts.yml] ******************************************
2025-05-03 00:59:12.785551 | orchestrator | Saturday 03 May 2025 00:58:44 +0000 (0:00:00.901) 0:00:01.567 **********
2025-05-03 00:59:12.785565 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2025-05-03 00:59:12.785610 | orchestrator |
2025-05-03 00:59:12.785625 | orchestrator | TASK [ceph-facts : check if it is atomic host] *********************************
2025-05-03 00:59:12.785639 | orchestrator | Saturday 03 May 2025 00:58:44 +0000 (0:00:00.247) 0:00:01.815 **********
2025-05-03 00:59:12.785653 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:59:12.785667 | orchestrator |
2025-05-03 00:59:12.785681 | orchestrator | TASK [ceph-facts : set_fact is_atomic] *****************************************
2025-05-03 00:59:12.785695 | orchestrator | Saturday 03 May 2025 00:58:45 +0000 (0:00:00.590) 0:00:02.405 **********
2025-05-03 00:59:12.785709 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:59:12.785723 | orchestrator |
2025-05-03 00:59:12.785737 | orchestrator | TASK [ceph-facts : check if podman binary is present] **************************
2025-05-03 00:59:12.785750 | orchestrator | Saturday 03 May 2025 00:58:45 +0000 (0:00:00.170) 0:00:02.575 **********
2025-05-03 00:59:12.785764 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:59:12.785778 | orchestrator |
2025-05-03 00:59:12.785792 | orchestrator | TASK [ceph-facts : set_fact container_binary] **********************************
2025-05-03 00:59:12.785805 | orchestrator | Saturday 03 May 2025 00:58:46 +0000 (0:00:00.447) 0:00:03.023 **********
2025-05-03 00:59:12.785819 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:59:12.785833 | orchestrator |
2025-05-03 00:59:12.785858 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ******************************************
2025-05-03 00:59:12.785873 | orchestrator | Saturday 03 May 2025 00:58:46 +0000 (0:00:00.141) 0:00:03.164 **********
2025-05-03 00:59:12.785887 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:59:12.785901 | orchestrator |
2025-05-03 00:59:12.785915 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] *********************
2025-05-03 00:59:12.785947 | orchestrator | Saturday 03 May 2025 00:58:46 +0000 (0:00:00.126) 0:00:03.291 **********
2025-05-03 00:59:12.785961 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:59:12.785975 | orchestrator |
2025-05-03 00:59:12.785989 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] ***
2025-05-03 00:59:12.786003 | orchestrator | Saturday 03 May 2025 00:58:46 +0000 (0:00:00.152) 0:00:03.444 **********
2025-05-03 00:59:12.786123 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:59:12.786149 | orchestrator |
2025-05-03 00:59:12.786167 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ******************
2025-05-03 00:59:12.786183 | orchestrator | Saturday 03 May 2025 00:58:46 +0000 (0:00:00.154) 0:00:03.598 **********
2025-05-03 00:59:12.786199 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:59:12.786253 | orchestrator |
2025-05-03 00:59:12.786270 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************
2025-05-03 00:59:12.786286 | orchestrator | Saturday 03 May 2025 00:58:46 +0000 (0:00:00.288) 0:00:03.887 **********
2025-05-03 00:59:12.786302 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-03 00:59:12.786318 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-03 00:59:12.786333 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-03 00:59:12.786348 | orchestrator |
2025-05-03 00:59:12.786364 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ********************************
2025-05-03 00:59:12.786379 | orchestrator | Saturday 03 May 2025 00:58:47 +0000 (0:00:00.258) 0:00:04.585 **********
2025-05-03 00:59:12.786422 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:59:12.786441 | orchestrator |
2025-05-03 00:59:12.786456 | orchestrator | TASK [ceph-facts : find a running mon container] *******************************
2025-05-03 00:59:12.786470 | orchestrator | Saturday 03 May 2025 00:58:47 +0000 (0:00:00.697) 0:00:04.843 **********
2025-05-03 00:59:12.786484 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2025-05-03 00:59:12.786499 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-03 00:59:12.786513 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-03 00:59:12.786527 | orchestrator |
2025-05-03 00:59:12.786540 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ********************************
2025-05-03 00:59:12.786555 | orchestrator | Saturday 03 May 2025 00:58:49 +0000 (0:00:01.925) 0:00:06.769 **********
2025-05-03 00:59:12.786569 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-03 00:59:12.786583 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-03 00:59:12.786597 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-03 00:59:12.786611 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:59:12.786653 | orchestrator |
2025-05-03 00:59:12.786668 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] *********************
2025-05-03 00:59:12.786695 | orchestrator | Saturday 03 May 2025 00:58:50 +0000 (0:00:00.435) 0:00:07.204 **********
2025-05-03 00:59:12.786717 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-05-03 00:59:12.786734 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-05-03 00:59:12.786748 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-05-03 00:59:12.786763 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:59:12.786777 | orchestrator |
2025-05-03 00:59:12.786801 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] ***********************
2025-05-03 00:59:12.786815 | orchestrator | Saturday 03 May 2025 00:58:51 +0000 (0:00:00.880) 0:00:08.085 **********
2025-05-03 00:59:12.786830 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-03 00:59:12.786845 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-03 00:59:12.786860 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-03 00:59:12.786875 | orchestrator | skipping: [testbed-node-0]
2025-05-03 00:59:12.786889 | orchestrator |
2025-05-03 00:59:12.786903 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] ***************************
2025-05-03 00:59:12.786917 | orchestrator | Saturday 03 May 2025 00:58:51 +0000 (0:00:00.189) 0:00:08.274 **********
2025-05-03 00:59:12.786934 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '33bba94c896d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-03 00:58:48.591745', 'end': '2025-05-03 00:58:48.631366', 'delta': '0:00:00.039621', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['33bba94c896d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-05-03 00:59:12.786986 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': 'de0c6f38c246', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-03 00:58:49.126534', 'end': '2025-05-03 00:58:49.170996', 'delta': '0:00:00.044462', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['de0c6f38c246'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-05-03 00:59:12.787036 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': 'd2bb2bc0e317', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-03 00:58:49.679699', 'end': '2025-05-03 00:58:49.711718', 'delta': '0:00:00.032019', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d2bb2bc0e317'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-05-03 00:59:12.787062 | orchestrator |
2025-05-03 00:59:12.787077 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] *******************************
2025-05-03 00:59:12.787091 | orchestrator | Saturday 03 May 2025 00:58:51 +0000 (0:00:00.193) 0:00:08.468 **********
2025-05-03 00:59:12.787105 | orchestrator | ok: [testbed-node-0]
2025-05-03 00:59:12.787119 | orchestrator |
2025-05-03 00:59:12.787133 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] *************
2025-05-03 00:59:12.787147 | orchestrator | Saturday 03 May 2025 00:58:51 +0000 (0:00:00.272) 0:00:08.741 **********
2025-05-03 00:59:12.787162 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2025-05-03 00:59:12.787176 | orchestrator |
2025-05-03 00:59:12.787190 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] *********************************
2025-05-03 00:59:12.787204 | orchestrator | Saturday
03 May 2025 00:58:53 +0000 (0:00:01.545) 0:00:10.287 ********** 2025-05-03 00:59:12.787218 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:59:12.787232 | orchestrator | 2025-05-03 00:59:12.787251 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-03 00:59:12.787266 | orchestrator | Saturday 03 May 2025 00:58:53 +0000 (0:00:00.151) 0:00:10.438 ********** 2025-05-03 00:59:12.787280 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:59:12.787293 | orchestrator | 2025-05-03 00:59:12.787307 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-03 00:59:12.787321 | orchestrator | Saturday 03 May 2025 00:58:53 +0000 (0:00:00.239) 0:00:10.677 ********** 2025-05-03 00:59:12.787335 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:59:12.787349 | orchestrator | 2025-05-03 00:59:12.787363 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-03 00:59:12.787377 | orchestrator | Saturday 03 May 2025 00:58:53 +0000 (0:00:00.130) 0:00:10.807 ********** 2025-05-03 00:59:12.787391 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:59:12.787405 | orchestrator | 2025-05-03 00:59:12.787419 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-03 00:59:12.787433 | orchestrator | Saturday 03 May 2025 00:58:54 +0000 (0:00:00.149) 0:00:10.956 ********** 2025-05-03 00:59:12.787447 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:59:12.787461 | orchestrator | 2025-05-03 00:59:12.787475 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-03 00:59:12.787489 | orchestrator | Saturday 03 May 2025 00:58:54 +0000 (0:00:00.322) 0:00:11.279 ********** 2025-05-03 00:59:12.787503 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:59:12.787516 | orchestrator | 2025-05-03 00:59:12.787530 | orchestrator 
| TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-03 00:59:12.787544 | orchestrator | Saturday 03 May 2025 00:58:54 +0000 (0:00:00.128) 0:00:11.407 ********** 2025-05-03 00:59:12.787558 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:59:12.787572 | orchestrator | 2025-05-03 00:59:12.787586 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-03 00:59:12.787600 | orchestrator | Saturday 03 May 2025 00:58:54 +0000 (0:00:00.134) 0:00:11.542 ********** 2025-05-03 00:59:12.787614 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:59:12.787628 | orchestrator | 2025-05-03 00:59:12.787641 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-03 00:59:12.787655 | orchestrator | Saturday 03 May 2025 00:58:54 +0000 (0:00:00.113) 0:00:11.656 ********** 2025-05-03 00:59:12.787669 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:59:12.787683 | orchestrator | 2025-05-03 00:59:12.787697 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-03 00:59:12.787711 | orchestrator | Saturday 03 May 2025 00:58:54 +0000 (0:00:00.129) 0:00:11.785 ********** 2025-05-03 00:59:12.787725 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:59:12.787739 | orchestrator | 2025-05-03 00:59:12.787753 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-03 00:59:12.787767 | orchestrator | Saturday 03 May 2025 00:58:55 +0000 (0:00:00.324) 0:00:12.110 ********** 2025-05-03 00:59:12.787786 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:59:12.787800 | orchestrator | 2025-05-03 00:59:12.787814 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-03 00:59:12.787828 | orchestrator | Saturday 03 May 2025 00:58:55 +0000 (0:00:00.133) 0:00:12.244 ********** 2025-05-03 
00:59:12.787842 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:59:12.787856 | orchestrator | 2025-05-03 00:59:12.787870 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-03 00:59:12.787884 | orchestrator | Saturday 03 May 2025 00:58:55 +0000 (0:00:00.140) 0:00:12.385 ********** 2025-05-03 00:59:12.787898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:59:12.787920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:59:12.787936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:59:12.787951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:59:12.787970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:59:12.787985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:59:12.787999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-03 00:59:12.788034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-05-03 00:59:12.788069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5da3c3bd-eee1-4827-9013-ed3efdd154fa', 'scsi-SQEMU_QEMU_HARDDISK_5da3c3bd-eee1-4827-9013-ed3efdd154fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5da3c3bd-eee1-4827-9013-ed3efdd154fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_5da3c3bd-eee1-4827-9013-ed3efdd154fa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5da3c3bd-eee1-4827-9013-ed3efdd154fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_5da3c3bd-eee1-4827-9013-ed3efdd154fa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5da3c3bd-eee1-4827-9013-ed3efdd154fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_5da3c3bd-eee1-4827-9013-ed3efdd154fa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5da3c3bd-eee1-4827-9013-ed3efdd154fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_5da3c3bd-eee1-4827-9013-ed3efdd154fa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:59:12.788087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ee89ec0-10a2-40d4-b2a3-ab6963ecc84d', 'scsi-SQEMU_QEMU_HARDDISK_8ee89ec0-10a2-40d4-b2a3-ab6963ecc84d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:59:12.788104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb822f65-6f3e-4a32-952e-d8f6f7b2a5ab', 'scsi-SQEMU_QEMU_HARDDISK_eb822f65-6f3e-4a32-952e-d8f6f7b2a5ab'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:59:12.788119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efb501f0-cdfc-4df2-8f60-0563271b3e1b', 'scsi-SQEMU_QEMU_HARDDISK_efb501f0-cdfc-4df2-8f60-0563271b3e1b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:59:12.788141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-03-00-02-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-03 00:59:12.788156 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:59:12.788170 | orchestrator | 2025-05-03 00:59:12.788185 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-03 00:59:12.788199 | orchestrator | Saturday 03 May 2025 00:58:55 +0000 (0:00:00.278) 0:00:12.663 ********** 2025-05-03 00:59:12.788213 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:59:12.788228 | orchestrator | 2025-05-03 00:59:12.788242 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-03 00:59:12.788256 | orchestrator | Saturday 03 May 2025 00:58:56 +0000 (0:00:00.277) 0:00:12.941 ********** 2025-05-03 00:59:12.788270 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:59:12.788284 | orchestrator | 2025-05-03 00:59:12.788298 | orchestrator | TASK [ceph-facts : set_fact 
rgw_hostname] ************************************** 2025-05-03 00:59:12.788312 | orchestrator | Saturday 03 May 2025 00:58:56 +0000 (0:00:00.149) 0:00:13.091 ********** 2025-05-03 00:59:12.788326 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:59:12.788340 | orchestrator | 2025-05-03 00:59:12.788355 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-03 00:59:12.788369 | orchestrator | Saturday 03 May 2025 00:58:56 +0000 (0:00:00.142) 0:00:13.234 ********** 2025-05-03 00:59:12.788393 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:59:12.788409 | orchestrator | 2025-05-03 00:59:12.788423 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-03 00:59:12.788437 | orchestrator | Saturday 03 May 2025 00:58:56 +0000 (0:00:00.511) 0:00:13.745 ********** 2025-05-03 00:59:12.788451 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:59:12.788465 | orchestrator | 2025-05-03 00:59:12.788479 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-03 00:59:12.788494 | orchestrator | Saturday 03 May 2025 00:58:56 +0000 (0:00:00.126) 0:00:13.871 ********** 2025-05-03 00:59:12.788508 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:59:12.788522 | orchestrator | 2025-05-03 00:59:12.788535 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-03 00:59:12.788549 | orchestrator | Saturday 03 May 2025 00:58:57 +0000 (0:00:00.467) 0:00:14.339 ********** 2025-05-03 00:59:12.788563 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:59:12.788586 | orchestrator | 2025-05-03 00:59:12.788601 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-03 00:59:12.788614 | orchestrator | Saturday 03 May 2025 00:58:57 +0000 (0:00:00.414) 0:00:14.754 ********** 2025-05-03 00:59:12.788628 | orchestrator | skipping: 
[testbed-node-0] 2025-05-03 00:59:12.788642 | orchestrator | 2025-05-03 00:59:12.788656 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-03 00:59:12.788670 | orchestrator | Saturday 03 May 2025 00:58:58 +0000 (0:00:00.270) 0:00:15.024 ********** 2025-05-03 00:59:12.788683 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:59:12.788697 | orchestrator | 2025-05-03 00:59:12.788711 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-03 00:59:12.788725 | orchestrator | Saturday 03 May 2025 00:58:58 +0000 (0:00:00.147) 0:00:15.172 ********** 2025-05-03 00:59:12.788739 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-03 00:59:12.788753 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-03 00:59:12.788778 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-03 00:59:12.788792 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:59:12.788807 | orchestrator | 2025-05-03 00:59:12.788821 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-03 00:59:12.788835 | orchestrator | Saturday 03 May 2025 00:58:58 +0000 (0:00:00.492) 0:00:15.664 ********** 2025-05-03 00:59:12.788849 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-03 00:59:12.788863 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-03 00:59:12.788877 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-03 00:59:12.788891 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:59:12.788905 | orchestrator | 2025-05-03 00:59:12.788920 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-03 00:59:12.788933 | orchestrator | Saturday 03 May 2025 00:58:59 +0000 (0:00:00.464) 0:00:16.128 ********** 2025-05-03 00:59:12.788947 | orchestrator 
| ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-03 00:59:12.788962 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-03 00:59:12.788976 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-03 00:59:12.788989 | orchestrator | 2025-05-03 00:59:12.789003 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-03 00:59:12.789044 | orchestrator | Saturday 03 May 2025 00:59:00 +0000 (0:00:01.142) 0:00:17.271 ********** 2025-05-03 00:59:12.789068 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-03 00:59:12.789082 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-03 00:59:12.789096 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-03 00:59:12.789110 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:59:12.789124 | orchestrator | 2025-05-03 00:59:12.789138 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-03 00:59:12.789152 | orchestrator | Saturday 03 May 2025 00:59:00 +0000 (0:00:00.220) 0:00:17.492 ********** 2025-05-03 00:59:12.789166 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-03 00:59:12.789180 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-03 00:59:12.789194 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-03 00:59:12.789208 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:59:12.789222 | orchestrator | 2025-05-03 00:59:12.789236 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-03 00:59:12.789250 | orchestrator | Saturday 03 May 2025 00:59:00 +0000 (0:00:00.245) 0:00:17.737 ********** 2025-05-03 00:59:12.789264 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-05-03 00:59:12.789278 | orchestrator | skipping: 
[testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-03 00:59:12.789293 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-03 00:59:12.789307 | orchestrator | 2025-05-03 00:59:12.789321 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-03 00:59:12.789335 | orchestrator | Saturday 03 May 2025 00:59:01 +0000 (0:00:00.224) 0:00:17.962 ********** 2025-05-03 00:59:12.789348 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:59:12.789363 | orchestrator | 2025-05-03 00:59:12.789377 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-03 00:59:12.789390 | orchestrator | Saturday 03 May 2025 00:59:01 +0000 (0:00:00.137) 0:00:18.099 ********** 2025-05-03 00:59:12.789405 | orchestrator | skipping: [testbed-node-0] 2025-05-03 00:59:12.789419 | orchestrator | 2025-05-03 00:59:12.789433 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-03 00:59:12.789447 | orchestrator | Saturday 03 May 2025 00:59:01 +0000 (0:00:00.319) 0:00:18.418 ********** 2025-05-03 00:59:12.789461 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-03 00:59:12.789488 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-03 00:59:12.789504 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-03 00:59:12.789518 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-03 00:59:12.789537 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-03 00:59:12.789551 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-03 00:59:12.789565 | orchestrator | ok: [testbed-node-0 -> 
testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-03 00:59:12.789579 | orchestrator | 2025-05-03 00:59:12.789593 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-03 00:59:12.789607 | orchestrator | Saturday 03 May 2025 00:59:02 +0000 (0:00:00.839) 0:00:19.258 ********** 2025-05-03 00:59:12.789621 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-03 00:59:12.789635 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-03 00:59:12.789649 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-03 00:59:12.789663 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-03 00:59:12.789677 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-03 00:59:12.789691 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-03 00:59:12.789704 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-03 00:59:12.789718 | orchestrator | 2025-05-03 00:59:12.789732 | orchestrator | TASK [ceph-fetch-keys : lookup keys in /etc/ceph] ****************************** 2025-05-03 00:59:12.789746 | orchestrator | Saturday 03 May 2025 00:59:03 +0000 (0:00:01.436) 0:00:20.694 ********** 2025-05-03 00:59:12.789760 | orchestrator | ok: [testbed-node-0] 2025-05-03 00:59:12.789775 | orchestrator | 2025-05-03 00:59:12.789788 | orchestrator | TASK [ceph-fetch-keys : create a local fetch directory if it does not exist] *** 2025-05-03 00:59:12.789802 | orchestrator | Saturday 03 May 2025 00:59:04 +0000 (0:00:00.471) 0:00:21.166 ********** 2025-05-03 00:59:12.789816 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-03 00:59:12.789830 | orchestrator | 2025-05-03 00:59:12.789845 | orchestrator | TASK [ceph-fetch-keys : 
copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/] *** 2025-05-03 00:59:12.789859 | orchestrator | Saturday 03 May 2025 00:59:04 +0000 (0:00:00.634) 0:00:21.800 ********** 2025-05-03 00:59:12.789873 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.admin.keyring) 2025-05-03 00:59:12.789887 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder-backup.keyring) 2025-05-03 00:59:12.789901 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder.keyring) 2025-05-03 00:59:12.789914 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.crash.keyring) 2025-05-03 00:59:12.789928 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.glance.keyring) 2025-05-03 00:59:12.789942 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.gnocchi.keyring) 2025-05-03 00:59:12.789956 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.manila.keyring) 2025-05-03 00:59:12.789970 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.nova.keyring) 2025-05-03 00:59:12.789984 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-0.keyring) 2025-05-03 00:59:12.789998 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-1.keyring) 2025-05-03 00:59:12.790070 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-2.keyring) 2025-05-03 00:59:12.790090 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mon.keyring) 2025-05-03 00:59:12.790104 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) 2025-05-03 00:59:12.790125 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) 2025-05-03 00:59:12.790139 | orchestrator | changed: [testbed-node-0] => 
(item=/var/lib/ceph/bootstrap-mds/ceph.keyring) 2025-05-03 00:59:12.790153 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) 2025-05-03 00:59:12.790172 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr/ceph.keyring) 2025-05-03 00:59:12.790187 | orchestrator | 2025-05-03 00:59:12.790201 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-03 00:59:12.790215 | orchestrator | testbed-node-0 : ok=28  changed=3  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-05-03 00:59:12.790230 | orchestrator | 2025-05-03 00:59:12.790244 | orchestrator | 2025-05-03 00:59:12.790258 | orchestrator | 2025-05-03 00:59:12.790272 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-03 00:59:12.790286 | orchestrator | Saturday 03 May 2025 00:59:10 +0000 (0:00:05.799) 0:00:27.600 ********** 2025-05-03 00:59:12.790300 | orchestrator | =============================================================================== 2025-05-03 00:59:12.790314 | orchestrator | ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/ --- 5.80s 2025-05-03 00:59:12.790329 | orchestrator | ceph-facts : find a running mon container ------------------------------- 1.93s 2025-05-03 00:59:12.790343 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.55s 2025-05-03 00:59:12.790364 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.44s 2025-05-03 00:59:15.839703 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.14s 2025-05-03 00:59:15.839828 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.90s 2025-05-03 00:59:15.839849 | orchestrator | ceph-facts : check if the ceph mon socket is in-use 
--------------------- 0.88s 2025-05-03 00:59:15.839864 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 0.84s 2025-05-03 00:59:15.839878 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.70s 2025-05-03 00:59:15.839892 | orchestrator | ceph-fetch-keys : create a local fetch directory if it does not exist --- 0.63s 2025-05-03 00:59:15.839906 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.59s 2025-05-03 00:59:15.839921 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.51s 2025-05-03 00:59:15.839935 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.49s 2025-05-03 00:59:15.839949 | orchestrator | ceph-fetch-keys : lookup keys in /etc/ceph ------------------------------ 0.47s 2025-05-03 00:59:15.839962 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.47s 2025-05-03 00:59:15.839976 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.46s 2025-05-03 00:59:15.839990 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.45s 2025-05-03 00:59:15.840049 | orchestrator | ceph-facts : check for a ceph mon socket -------------------------------- 0.44s 2025-05-03 00:59:15.840068 | orchestrator | ceph-facts : set osd_pool_default_crush_rule fact ----------------------- 0.41s 2025-05-03 00:59:15.840082 | orchestrator | ceph-facts : set_fact build dedicated_devices from resolved symlinks ---- 0.32s 2025-05-03 00:59:15.840096 | orchestrator | 2025-05-03 00:59:12 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:59:15.840127 | orchestrator | 2025-05-03 00:59:15 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 00:59:15.841377 | orchestrator | 2025-05-03 00:59:15 | INFO  | Task 
ed90f1ce-3170-4034-9f0e-e50188a0a27b is in state SUCCESS 2025-05-03 00:59:15.844461 | orchestrator | 2025-05-03 00:59:15 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED 2025-05-03 00:59:15.848303 | orchestrator | 2025-05-03 00:59:15 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED 2025-05-03 00:59:15.851098 | orchestrator | 2025-05-03 00:59:15 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 00:59:15.855133 | orchestrator | 2025-05-03 00:59:15 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:59:15.857147 | orchestrator | 2025-05-03 00:59:15 | INFO  | Task 196435db-3b7b-43ba-ab53-81892cb8b167 is in state STARTED 2025-05-03 00:59:18.915149 | orchestrator | 2025-05-03 00:59:15 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:59:18.915292 | orchestrator | 2025-05-03 00:59:18 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 00:59:18.916251 | orchestrator | 2025-05-03 00:59:18 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED 2025-05-03 00:59:18.918135 | orchestrator | 2025-05-03 00:59:18 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED 2025-05-03 00:59:18.919851 | orchestrator | 2025-05-03 00:59:18 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 00:59:18.921760 | orchestrator | 2025-05-03 00:59:18 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:59:18.923268 | orchestrator | 2025-05-03 00:59:18 | INFO  | Task 196435db-3b7b-43ba-ab53-81892cb8b167 is in state STARTED 2025-05-03 00:59:18.923727 | orchestrator | 2025-05-03 00:59:18 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:59:21.971517 | orchestrator | 2025-05-03 00:59:21 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 00:59:21.972236 | orchestrator | 2025-05-03 00:59:21 | INFO  | Task 
d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED 2025-05-03 00:59:21.973840 | orchestrator | 2025-05-03 00:59:21 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED 2025-05-03 00:59:21.975145 | orchestrator | 2025-05-03 00:59:21 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 00:59:21.976799 | orchestrator | 2025-05-03 00:59:21 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:59:21.978290 | orchestrator | 2025-05-03 00:59:21 | INFO  | Task 196435db-3b7b-43ba-ab53-81892cb8b167 is in state STARTED 2025-05-03 00:59:21.978602 | orchestrator | 2025-05-03 00:59:21 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:59:25.030314 | orchestrator | 2025-05-03 00:59:25 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 00:59:25.032819 | orchestrator | 2025-05-03 00:59:25 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED 2025-05-03 00:59:25.033423 | orchestrator | 2025-05-03 00:59:25 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED 2025-05-03 00:59:25.035923 | orchestrator | 2025-05-03 00:59:25 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 00:59:25.040482 | orchestrator | 2025-05-03 00:59:25 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:59:25.044218 | orchestrator | 2025-05-03 00:59:25 | INFO  | Task 196435db-3b7b-43ba-ab53-81892cb8b167 is in state STARTED 2025-05-03 00:59:28.088477 | orchestrator | 2025-05-03 00:59:25 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:59:28.088717 | orchestrator | 2025-05-03 00:59:28 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 00:59:28.094900 | orchestrator | 2025-05-03 00:59:28 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED 2025-05-03 00:59:28.094977 | orchestrator | 2025-05-03 00:59:28 | INFO  | Task 
9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED 2025-05-03 00:59:28.098384 | orchestrator | 2025-05-03 00:59:28 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 00:59:28.100253 | orchestrator | 2025-05-03 00:59:28 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:59:28.101882 | orchestrator | 2025-05-03 00:59:28 | INFO  | Task 196435db-3b7b-43ba-ab53-81892cb8b167 is in state STARTED 2025-05-03 00:59:31.144499 | orchestrator | 2025-05-03 00:59:28 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:59:31.144637 | orchestrator | 2025-05-03 00:59:31 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 00:59:31.147252 | orchestrator | 2025-05-03 00:59:31 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED 2025-05-03 00:59:31.149887 | orchestrator | 2025-05-03 00:59:31 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED 2025-05-03 00:59:31.151905 | orchestrator | 2025-05-03 00:59:31 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 00:59:31.152676 | orchestrator | 2025-05-03 00:59:31 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:59:31.154228 | orchestrator | 2025-05-03 00:59:31 | INFO  | Task 196435db-3b7b-43ba-ab53-81892cb8b167 is in state STARTED 2025-05-03 00:59:34.199244 | orchestrator | 2025-05-03 00:59:31 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:59:34.199376 | orchestrator | 2025-05-03 00:59:34 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 00:59:34.200966 | orchestrator | 2025-05-03 00:59:34 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED 2025-05-03 00:59:34.202783 | orchestrator | 2025-05-03 00:59:34 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED 2025-05-03 00:59:34.204598 | orchestrator | 2025-05-03 00:59:34 | INFO  | Task 
8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 00:59:34.205807 | orchestrator | 2025-05-03 00:59:34 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:59:34.207262 | orchestrator | 2025-05-03 00:59:34 | INFO  | Task 196435db-3b7b-43ba-ab53-81892cb8b167 is in state STARTED 2025-05-03 00:59:34.207382 | orchestrator | 2025-05-03 00:59:34 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:59:37.250101 | orchestrator | 2025-05-03 00:59:37 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 00:59:37.251876 | orchestrator | 2025-05-03 00:59:37 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED 2025-05-03 00:59:37.253172 | orchestrator | 2025-05-03 00:59:37 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED 2025-05-03 00:59:37.254574 | orchestrator | 2025-05-03 00:59:37 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 00:59:37.258639 | orchestrator | 2025-05-03 00:59:37 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:59:37.261484 | orchestrator | 2025-05-03 00:59:37 | INFO  | Task 196435db-3b7b-43ba-ab53-81892cb8b167 is in state STARTED 2025-05-03 00:59:37.261814 | orchestrator | 2025-05-03 00:59:37 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:59:40.313965 | orchestrator | 2025-05-03 00:59:40 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 00:59:40.315810 | orchestrator | 2025-05-03 00:59:40 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED 2025-05-03 00:59:40.317253 | orchestrator | 2025-05-03 00:59:40 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED 2025-05-03 00:59:40.318314 | orchestrator | 2025-05-03 00:59:40 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 00:59:40.320659 | orchestrator | 2025-05-03 00:59:40 | INFO  | Task 
48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:59:40.321613 | orchestrator | 2025-05-03 00:59:40 | INFO  | Task 196435db-3b7b-43ba-ab53-81892cb8b167 is in state STARTED 2025-05-03 00:59:40.321863 | orchestrator | 2025-05-03 00:59:40 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:59:43.373537 | orchestrator | 2025-05-03 00:59:43 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 00:59:43.374842 | orchestrator | 2025-05-03 00:59:43 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED 2025-05-03 00:59:43.374874 | orchestrator | 2025-05-03 00:59:43 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED 2025-05-03 00:59:43.378440 | orchestrator | 2025-05-03 00:59:43 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 00:59:43.379462 | orchestrator | 2025-05-03 00:59:43 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:59:43.381915 | orchestrator | 2025-05-03 00:59:43 | INFO  | Task 196435db-3b7b-43ba-ab53-81892cb8b167 is in state STARTED 2025-05-03 00:59:46.427137 | orchestrator | 2025-05-03 00:59:43 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:59:46.427265 | orchestrator | 2025-05-03 00:59:46 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 00:59:46.428061 | orchestrator | 2025-05-03 00:59:46 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED 2025-05-03 00:59:46.428097 | orchestrator | 2025-05-03 00:59:46 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED 2025-05-03 00:59:46.428735 | orchestrator | 2025-05-03 00:59:46 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 00:59:46.429341 | orchestrator | 2025-05-03 00:59:46 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:59:46.430187 | orchestrator | 2025-05-03 00:59:46 | INFO  | Task 
196435db-3b7b-43ba-ab53-81892cb8b167 is in state STARTED 2025-05-03 00:59:49.459119 | orchestrator | 2025-05-03 00:59:46 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:59:49.459242 | orchestrator | 2025-05-03 00:59:49 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 00:59:49.460042 | orchestrator | 2025-05-03 00:59:49 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED 2025-05-03 00:59:49.460087 | orchestrator | 2025-05-03 00:59:49 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED 2025-05-03 00:59:49.460706 | orchestrator | 2025-05-03 00:59:49 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 00:59:49.461544 | orchestrator | 2025-05-03 00:59:49 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:59:49.462205 | orchestrator | 2025-05-03 00:59:49 | INFO  | Task 196435db-3b7b-43ba-ab53-81892cb8b167 is in state STARTED 2025-05-03 00:59:52.510750 | orchestrator | 2025-05-03 00:59:49 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:59:52.510874 | orchestrator | 2025-05-03 00:59:52 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 00:59:52.511620 | orchestrator | 2025-05-03 00:59:52 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED 2025-05-03 00:59:52.511670 | orchestrator | 2025-05-03 00:59:52 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED 2025-05-03 00:59:52.512106 | orchestrator | 2025-05-03 00:59:52 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 00:59:52.512701 | orchestrator | 2025-05-03 00:59:52 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:59:52.513393 | orchestrator | 2025-05-03 00:59:52 | INFO  | Task 196435db-3b7b-43ba-ab53-81892cb8b167 is in state STARTED 2025-05-03 00:59:55.553419 | orchestrator | 2025-05-03 00:59:52 | INFO  | Wait 1 
second(s) until the next check 2025-05-03 00:59:55.553567 | orchestrator | 2025-05-03 00:59:55 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 00:59:55.554210 | orchestrator | 2025-05-03 00:59:55 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED 2025-05-03 00:59:55.555413 | orchestrator | 2025-05-03 00:59:55 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED 2025-05-03 00:59:55.557082 | orchestrator | 2025-05-03 00:59:55 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 00:59:55.558603 | orchestrator | 2025-05-03 00:59:55 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:59:55.559710 | orchestrator | 2025-05-03 00:59:55 | INFO  | Task 196435db-3b7b-43ba-ab53-81892cb8b167 is in state STARTED 2025-05-03 00:59:58.592453 | orchestrator | 2025-05-03 00:59:55 | INFO  | Wait 1 second(s) until the next check 2025-05-03 00:59:58.592562 | orchestrator | 2025-05-03 00:59:58 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 00:59:58.593128 | orchestrator | 2025-05-03 00:59:58 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED 2025-05-03 00:59:58.593166 | orchestrator | 2025-05-03 00:59:58 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED 2025-05-03 00:59:58.593602 | orchestrator | 2025-05-03 00:59:58 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 00:59:58.594203 | orchestrator | 2025-05-03 00:59:58 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 00:59:58.594773 | orchestrator | 2025-05-03 00:59:58 | INFO  | Task 196435db-3b7b-43ba-ab53-81892cb8b167 is in state STARTED 2025-05-03 00:59:58.594908 | orchestrator | 2025-05-03 00:59:58 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:00:01.632043 | orchestrator | 2025-05-03 01:00:01 | INFO  | Task 
f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:00:01.632803 | orchestrator | 2025-05-03 01:00:01 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED 2025-05-03 01:00:01.633166 | orchestrator | 2025-05-03 01:00:01 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED 2025-05-03 01:00:01.633899 | orchestrator | 2025-05-03 01:00:01 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:00:01.634564 | orchestrator | 2025-05-03 01:00:01 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:00:01.635237 | orchestrator | 2025-05-03 01:00:01 | INFO  | Task 196435db-3b7b-43ba-ab53-81892cb8b167 is in state STARTED 2025-05-03 01:00:01.635360 | orchestrator | 2025-05-03 01:00:01 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:00:04.668197 | orchestrator | 2025-05-03 01:00:04 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:00:04.668803 | orchestrator | 2025-05-03 01:00:04 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED 2025-05-03 01:00:04.669757 | orchestrator | 2025-05-03 01:00:04 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED 2025-05-03 01:00:04.670510 | orchestrator | 2025-05-03 01:00:04 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:00:04.671217 | orchestrator | 2025-05-03 01:00:04 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:00:04.672046 | orchestrator | 2025-05-03 01:00:04 | INFO  | Task 196435db-3b7b-43ba-ab53-81892cb8b167 is in state STARTED 2025-05-03 01:00:04.672292 | orchestrator | 2025-05-03 01:00:04 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:00:07.708560 | orchestrator | 2025-05-03 01:00:07 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:00:07.712698 | orchestrator | 2025-05-03 01:00:07 | INFO  | Task 
d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED 2025-05-03 01:00:07.714771 | orchestrator | 2025-05-03 01:00:07 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED 2025-05-03 01:00:07.716800 | orchestrator | 2025-05-03 01:00:07 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:00:07.718124 | orchestrator | 2025-05-03 01:00:07 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:00:07.720055 | orchestrator | 2025-05-03 01:00:07 | INFO  | Task 196435db-3b7b-43ba-ab53-81892cb8b167 is in state STARTED 2025-05-03 01:00:07.720172 | orchestrator | 2025-05-03 01:00:07 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:00:10.770097 | orchestrator | 2025-05-03 01:00:10 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:00:10.770441 | orchestrator | 2025-05-03 01:00:10 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED 2025-05-03 01:00:10.771805 | orchestrator | 2025-05-03 01:00:10 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED 2025-05-03 01:00:10.772773 | orchestrator | 2025-05-03 01:00:10 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:00:10.773323 | orchestrator | 2025-05-03 01:00:10 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:00:10.778916 | orchestrator | 2025-05-03 01:00:10 | INFO  | Task 196435db-3b7b-43ba-ab53-81892cb8b167 is in state STARTED 2025-05-03 01:00:13.808712 | orchestrator | 2025-05-03 01:00:10 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:00:13.808831 | orchestrator | 2025-05-03 01:00:13 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:00:13.809178 | orchestrator | 2025-05-03 01:00:13 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED 2025-05-03 01:00:13.809839 | orchestrator | 2025-05-03 01:00:13 | INFO  | Task 
9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED 2025-05-03 01:00:13.812745 | orchestrator | 2025-05-03 01:00:13 | INFO  | Task 974c8f5b-b3d4-4ac2-8e2c-e0c187a7e901 is in state STARTED 2025-05-03 01:00:13.813267 | orchestrator | 2025-05-03 01:00:13 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:00:13.814162 | orchestrator | 2025-05-03 01:00:13 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:00:13.815593 | orchestrator | 2025-05-03 01:00:13.815622 | orchestrator | 2025-05-03 01:00:13.815637 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-05-03 01:00:13.815651 | orchestrator | 2025-05-03 01:00:13.815666 | orchestrator | TASK [Check ceph keys] ********************************************************* 2025-05-03 01:00:13.815680 | orchestrator | Saturday 03 May 2025 00:58:34 +0000 (0:00:00.141) 0:00:00.141 ********** 2025-05-03 01:00:13.815694 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-05-03 01:00:13.815730 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-03 01:00:13.815745 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-03 01:00:13.815759 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-05-03 01:00:13.815773 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-03 01:00:13.815799 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-05-03 01:00:13.815814 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-05-03 01:00:13.815827 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-05-03 01:00:13.815841 | orchestrator | ok: [testbed-manager -> localhost] => 
(item=ceph.client.manila.keyring) 2025-05-03 01:00:13.815855 | orchestrator | 2025-05-03 01:00:13.815869 | orchestrator | TASK [Set _fetch_ceph_keys fact] *********************************************** 2025-05-03 01:00:13.815882 | orchestrator | Saturday 03 May 2025 00:58:37 +0000 (0:00:02.955) 0:00:03.097 ********** 2025-05-03 01:00:13.815899 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-05-03 01:00:13.815913 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-03 01:00:13.815982 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-03 01:00:13.815998 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-05-03 01:00:13.816012 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-03 01:00:13.816026 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-05-03 01:00:13.816040 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-05-03 01:00:13.816054 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-05-03 01:00:13.816068 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-05-03 01:00:13.816081 | orchestrator | 2025-05-03 01:00:13.816095 | orchestrator | TASK [Point out that the following task takes some time and does not give any output] *** 2025-05-03 01:00:13.816109 | orchestrator | Saturday 03 May 2025 00:58:37 +0000 (0:00:00.260) 0:00:03.357 ********** 2025-05-03 01:00:13.816123 | orchestrator | ok: [testbed-manager] => { 2025-05-03 01:00:13.816139 | orchestrator |  "msg": "The task 'Fetch ceph keys from the first monitor node' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete." 
2025-05-03 01:00:13.816155 | orchestrator | } 2025-05-03 01:00:13.816170 | orchestrator | 2025-05-03 01:00:13.816183 | orchestrator | TASK [Fetch ceph keys from the first monitor node] ***************************** 2025-05-03 01:00:13.816197 | orchestrator | Saturday 03 May 2025 00:58:37 +0000 (0:00:00.175) 0:00:03.532 ********** 2025-05-03 01:00:13.816213 | orchestrator | changed: [testbed-manager] 2025-05-03 01:00:13.816235 | orchestrator | 2025-05-03 01:00:13.816252 | orchestrator | TASK [Copy ceph infrastructure keys to the configuration repository] *********** 2025-05-03 01:00:13.816267 | orchestrator | Saturday 03 May 2025 00:59:11 +0000 (0:00:33.277) 0:00:36.810 ********** 2025-05-03 01:00:13.816284 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.admin.keyring', 'dest': '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'}) 2025-05-03 01:00:13.816301 | orchestrator | 2025-05-03 01:00:13.816317 | orchestrator | TASK [Copy ceph kolla keys to the configuration repository] ******************** 2025-05-03 01:00:13.816333 | orchestrator | Saturday 03 May 2025 00:59:11 +0000 (0:00:00.417) 0:00:37.227 ********** 2025-05-03 01:00:13.816349 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring'}) 2025-05-03 01:00:13.816366 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring'}) 2025-05-03 01:00:13.816403 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder-backup.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring'}) 2025-05-03 01:00:13.816420 | orchestrator | changed: [testbed-manager] => (item={'src': 
'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.cinder.keyring'}) 2025-05-03 01:00:13.816436 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.nova.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring'}) 2025-05-03 01:00:13.816464 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.glance.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring'}) 2025-05-03 01:00:16.873142 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.gnocchi.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring'}) 2025-05-03 01:00:16.873243 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.manila.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring'}) 2025-05-03 01:00:16.873262 | orchestrator | 2025-05-03 01:00:16.873278 | orchestrator | TASK [Copy ceph custom keys to the configuration repository] ******************* 2025-05-03 01:00:16.873293 | orchestrator | Saturday 03 May 2025 00:59:14 +0000 (0:00:02.533) 0:00:39.761 ********** 2025-05-03 01:00:16.873307 | orchestrator | skipping: [testbed-manager] 2025-05-03 01:00:16.873323 | orchestrator | 2025-05-03 01:00:16.873337 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-03 01:00:16.873352 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-03 01:00:16.873366 | orchestrator | 2025-05-03 01:00:16.873380 | orchestrator | Saturday 03 May 2025 00:59:14 +0000 (0:00:00.020) 0:00:39.782 ********** 2025-05-03 01:00:16.873394 | orchestrator | =============================================================================== 2025-05-03 01:00:16.873408 | orchestrator | Fetch ceph keys 
from the first monitor node ---------------------------- 33.28s 2025-05-03 01:00:16.873422 | orchestrator | Check ceph keys --------------------------------------------------------- 2.96s 2025-05-03 01:00:16.873436 | orchestrator | Copy ceph kolla keys to the configuration repository -------------------- 2.53s 2025-05-03 01:00:16.873449 | orchestrator | Copy ceph infrastructure keys to the configuration repository ----------- 0.42s 2025-05-03 01:00:16.873479 | orchestrator | Set _fetch_ceph_keys fact ----------------------------------------------- 0.26s 2025-05-03 01:00:16.873494 | orchestrator | Point out that the following task takes some time and does not give any output --- 0.18s 2025-05-03 01:00:16.873509 | orchestrator | Copy ceph custom keys to the configuration repository ------------------- 0.02s 2025-05-03 01:00:16.873522 | orchestrator | 2025-05-03 01:00:16.873537 | orchestrator | 2025-05-03 01:00:13 | INFO  | Task 196435db-3b7b-43ba-ab53-81892cb8b167 is in state SUCCESS 2025-05-03 01:00:16.873551 | orchestrator | 2025-05-03 01:00:13 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:00:16.873579 | orchestrator | 2025-05-03 01:00:16 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:00:16.874163 | orchestrator | 2025-05-03 01:00:16 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED 2025-05-03 01:00:16.874211 | orchestrator | 2025-05-03 01:00:16 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED 2025-05-03 01:00:16.876732 | orchestrator | 2025-05-03 01:00:16 | INFO  | Task 974c8f5b-b3d4-4ac2-8e2c-e0c187a7e901 is in state STARTED 2025-05-03 01:00:16.878536 | orchestrator | 2025-05-03 01:00:16 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:00:16.880795 | orchestrator | 2025-05-03 01:00:16 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:00:16.881594 | orchestrator | 2025-05-03 01:00:16 | INFO  | 
Wait 1 second(s) until the next check 2025-05-03 01:00:19.915598 | orchestrator | 2025-05-03 01:00:19 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:00:19.915973 | orchestrator | 2025-05-03 01:00:19 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED 2025-05-03 01:00:19.916389 | orchestrator | 2025-05-03 01:00:19 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED 2025-05-03 01:00:19.917165 | orchestrator | 2025-05-03 01:00:19 | INFO  | Task 974c8f5b-b3d4-4ac2-8e2c-e0c187a7e901 is in state STARTED 2025-05-03 01:00:19.917791 | orchestrator | 2025-05-03 01:00:19 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:00:19.919210 | orchestrator | 2025-05-03 01:00:19 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:00:22.949417 | orchestrator | 2025-05-03 01:00:19 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:00:22.949535 | orchestrator | 2025-05-03 01:00:22 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:00:22.950139 | orchestrator | 2025-05-03 01:00:22 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED 2025-05-03 01:00:22.950192 | orchestrator | 2025-05-03 01:00:22 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED 2025-05-03 01:00:22.951276 | orchestrator | 2025-05-03 01:00:22 | INFO  | Task 974c8f5b-b3d4-4ac2-8e2c-e0c187a7e901 is in state STARTED 2025-05-03 01:00:22.951832 | orchestrator | 2025-05-03 01:00:22 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:00:22.953483 | orchestrator | 2025-05-03 01:00:22 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:00:25.985027 | orchestrator | 2025-05-03 01:00:22 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:00:25.985156 | orchestrator | 2025-05-03 01:00:25 | INFO  | Task 
f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED
2025-05-03 01:00:25.985406 | orchestrator | 2025-05-03 01:00:25 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED
2025-05-03 01:00:25.986135 | orchestrator | 2025-05-03 01:00:25 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED
2025-05-03 01:00:25.986772 | orchestrator | 2025-05-03 01:00:25 | INFO  | Task 974c8f5b-b3d4-4ac2-8e2c-e0c187a7e901 is in state STARTED
2025-05-03 01:00:25.987547 | orchestrator | 2025-05-03 01:00:25 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED
2025-05-03 01:00:25.988252 | orchestrator | 2025-05-03 01:00:25 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:00:25.988602 | orchestrator | 2025-05-03 01:00:25 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:00:44.241939 | orchestrator | 2025-05-03 01:00:44 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED
2025-05-03 01:00:44.242485 | orchestrator | 2025-05-03 01:00:44 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED
2025-05-03 01:00:44.242525 | orchestrator | 2025-05-03 01:00:44 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED
2025-05-03 01:00:44.242925 | orchestrator | 2025-05-03 01:00:44 | INFO  | Task
974c8f5b-b3d4-4ac2-8e2c-e0c187a7e901 is in state STARTED
2025-05-03 01:00:44.243484 | orchestrator | 2025-05-03 01:00:44 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED
2025-05-03 01:00:44.244025 | orchestrator | 2025-05-03 01:00:44 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:00:47.273245 | orchestrator | 2025-05-03 01:00:44 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:00:47.273510 | orchestrator | 2025-05-03 01:00:47 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED
2025-05-03 01:00:47.274219 | orchestrator | 2025-05-03 01:00:47 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED
2025-05-03 01:00:47.274258 | orchestrator | 2025-05-03 01:00:47 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED
2025-05-03 01:00:47.274663 | orchestrator | 2025-05-03 01:00:47 | INFO  | Task 974c8f5b-b3d4-4ac2-8e2c-e0c187a7e901 is in state SUCCESS
2025-05-03 01:00:47.274943 | orchestrator |
2025-05-03 01:00:47.274973 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-05-03 01:00:47.274988 | orchestrator |
2025-05-03 01:00:47.275002 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-05-03 01:00:47.275017 | orchestrator | Saturday 03 May 2025 00:59:17 +0000 (0:00:00.163) 0:00:00.163 **********
2025-05-03 01:00:47.275031 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-05-03 01:00:47.275060 | orchestrator |
2025-05-03 01:00:47.275075 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-05-03 01:00:47.275089 | orchestrator | Saturday 03 May 2025 00:59:17 +0000 (0:00:00.211) 0:00:00.374 **********
2025-05-03 01:00:47.275104 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-05-03 01:00:47.275118 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-05-03 01:00:47.275132 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-05-03 01:00:47.275147 | orchestrator |
2025-05-03 01:00:47.275161 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-05-03 01:00:47.275175 | orchestrator | Saturday 03 May 2025 00:59:18 +0000 (0:00:01.229) 0:00:01.604 **********
2025-05-03 01:00:47.275189 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-05-03 01:00:47.275287 | orchestrator |
2025-05-03 01:00:47.275306 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-05-03 01:00:47.275321 | orchestrator | Saturday 03 May 2025 00:59:19 +0000 (0:00:01.155) 0:00:02.760 **********
2025-05-03 01:00:47.275335 | orchestrator | changed: [testbed-manager]
2025-05-03 01:00:47.275357 | orchestrator |
2025-05-03 01:00:47.275372 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-05-03 01:00:47.275386 | orchestrator | Saturday 03 May 2025 00:59:20 +0000 (0:00:00.881) 0:00:03.642 **********
2025-05-03 01:00:47.275400 | orchestrator | changed: [testbed-manager]
2025-05-03 01:00:47.275434 | orchestrator |
2025-05-03 01:00:47.275449 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-05-03 01:00:47.275463 | orchestrator | Saturday 03 May 2025 00:59:21 +0000 (0:00:00.984) 0:00:04.627 **********
2025-05-03 01:00:47.275477 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2025-05-03 01:00:47.275491 | orchestrator | ok: [testbed-manager]
2025-05-03 01:00:47.275506 | orchestrator |
2025-05-03 01:00:47.275520 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-05-03 01:00:47.275534 | orchestrator | Saturday 03 May 2025 01:00:02 +0000 (0:00:40.435) 0:00:45.062 **********
2025-05-03 01:00:47.275548 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-05-03 01:00:47.275562 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-05-03 01:00:47.275577 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-05-03 01:00:47.275591 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-05-03 01:00:47.275605 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-05-03 01:00:47.275619 | orchestrator |
2025-05-03 01:00:47.275633 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-05-03 01:00:47.275647 | orchestrator | Saturday 03 May 2025 01:00:05 +0000 (0:00:03.573) 0:00:48.636 **********
2025-05-03 01:00:47.275661 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-05-03 01:00:47.275675 | orchestrator |
2025-05-03 01:00:47.275689 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-05-03 01:00:47.275703 | orchestrator | Saturday 03 May 2025 01:00:06 +0000 (0:00:00.403) 0:00:49.039 **********
2025-05-03 01:00:47.275717 | orchestrator | skipping: [testbed-manager]
2025-05-03 01:00:47.275736 | orchestrator |
2025-05-03 01:00:47.275750 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-05-03 01:00:47.275764 | orchestrator | Saturday 03 May 2025 01:00:06 +0000 (0:00:00.108) 0:00:49.147 **********
2025-05-03 01:00:47.275778 | orchestrator | skipping: [testbed-manager]
2025-05-03 01:00:47.275793 | orchestrator |
2025-05-03 01:00:47.275806 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-05-03 01:00:47.275820 | orchestrator | Saturday 03 May 2025 01:00:06 +0000 (0:00:00.276) 0:00:49.424 **********
2025-05-03 01:00:47.275981 | orchestrator | changed: [testbed-manager]
2025-05-03 01:00:47.276001 | orchestrator |
2025-05-03 01:00:47.276015 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-05-03 01:00:47.276029 | orchestrator | Saturday 03 May 2025 01:00:08 +0000 (0:00:01.476) 0:00:50.900 **********
2025-05-03 01:00:47.276044 | orchestrator | changed: [testbed-manager]
2025-05-03 01:00:47.276058 | orchestrator |
2025-05-03 01:00:47.276072 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-05-03 01:00:47.276086 | orchestrator | Saturday 03 May 2025 01:00:09 +0000 (0:00:01.128) 0:00:52.029 **********
2025-05-03 01:00:47.276100 | orchestrator | changed: [testbed-manager]
2025-05-03 01:00:47.276113 | orchestrator |
2025-05-03 01:00:47.276127 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-05-03 01:00:47.276141 | orchestrator | Saturday 03 May 2025 01:00:09 +0000 (0:00:00.533) 0:00:52.562 **********
2025-05-03 01:00:47.276155 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-05-03 01:00:47.276176 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-05-03 01:00:47.276190 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-05-03 01:00:47.276204 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-05-03 01:00:47.276218 | orchestrator |
2025-05-03 01:00:47.276232 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 01:00:47.276246 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-03 01:00:47.276261 | orchestrator |
2025-05-03 01:00:47.276286 | orchestrator | Saturday 03 May 2025 01:00:10 +0000 (0:00:01.274) 0:00:53.837 **********
2025-05-03 01:00:50.314430 | orchestrator | ===============================================================================
2025-05-03 01:00:50.314658 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.44s
2025-05-03 01:00:50.314687 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.57s
2025-05-03 01:00:50.314703 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.48s
2025-05-03 01:00:50.314717 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.27s
2025-05-03 01:00:50.314731 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.23s
2025-05-03 01:00:50.314745 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.16s
2025-05-03 01:00:50.314759 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 1.13s
2025-05-03 01:00:50.314773 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.98s
2025-05-03 01:00:50.314788 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.88s
2025-05-03 01:00:50.314802 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.53s
2025-05-03 01:00:50.314816 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.40s
2025-05-03 01:00:50.314830 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.28s
2025-05-03 01:00:50.314844 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.21s
2025-05-03 01:00:50.314858 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.11s
2025-05-03 01:00:50.315007 | orchestrator |
2025-05-03 01:00:50.315054 |
orchestrator | 2025-05-03 01:00:47 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED
2025-05-03 01:00:50.315070 | orchestrator | 2025-05-03 01:00:47 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:00:50.315084 | orchestrator | 2025-05-03 01:00:47 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:00:50.315114 | orchestrator | 2025-05-03 01:00:50 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED
2025-05-03 01:00:50.315441 | orchestrator | 2025-05-03 01:00:50 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state STARTED
2025-05-03 01:00:50.315476 | orchestrator | 2025-05-03 01:00:50 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED
2025-05-03 01:00:50.316281 | orchestrator | 2025-05-03 01:00:50 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED
2025-05-03 01:00:50.316564 | orchestrator | 2025-05-03 01:00:50 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:00:50.318078 | orchestrator | 2025-05-03 01:00:50 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:01:11.567442 | orchestrator | 2025-05-03 01:01:11 | INFO  | Task
f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED
2025-05-03 01:01:11.567973 | orchestrator | 2025-05-03 01:01:11 | INFO  | Task d5809cf1-0bec-4c55-ba1d-2b0fc0b21b71 is in state SUCCESS
2025-05-03 01:01:11.569547 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-05-03 01:01:11.569603 | orchestrator |
2025-05-03 01:01:11.570126 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2025-05-03 01:01:11.570155 | orchestrator |
2025-05-03 01:01:11.570170 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2025-05-03 01:01:11.570184 | orchestrator | Saturday 03 May 2025 01:00:13 +0000 (0:00:00.321) 0:00:00.321 **********
2025-05-03 01:01:11.570198 | orchestrator | changed: [testbed-manager]
2025-05-03 01:01:11.570214 | orchestrator |
2025-05-03 01:01:11.570228 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2025-05-03 01:01:11.570242 | orchestrator | Saturday 03 May 2025 01:00:15 +0000 (0:00:01.827) 0:00:02.148 **********
2025-05-03 01:01:11.570256 | orchestrator | changed: [testbed-manager]
2025-05-03 01:01:11.570270 | orchestrator |
2025-05-03 01:01:11.570284 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2025-05-03 01:01:11.570298 | orchestrator | Saturday 03 May 2025 01:00:16 +0000 (0:00:00.868) 0:00:03.016 **********
2025-05-03 01:01:11.570312 | orchestrator | changed: [testbed-manager]
2025-05-03 01:01:11.570326 | orchestrator |
2025-05-03 01:01:11.570340 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2025-05-03 01:01:11.570353 | orchestrator | Saturday 03 May 2025 01:00:17 +0000 (0:00:00.883) 0:00:03.900 **********
2025-05-03 01:01:11.570367 | orchestrator | changed: [testbed-manager]
2025-05-03 01:01:11.570381 | orchestrator |
2025-05-03 01:01:11.570395 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2025-05-03 01:01:11.570408 | orchestrator | Saturday 03 May 2025 01:00:18 +0000 (0:00:00.968) 0:00:04.869 **********
2025-05-03 01:01:11.570422 | orchestrator | changed: [testbed-manager]
2025-05-03 01:01:11.570436 | orchestrator |
2025-05-03 01:01:11.570450 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2025-05-03 01:01:11.570472 | orchestrator | Saturday 03 May 2025 01:00:19 +0000 (0:00:00.795) 0:00:05.664 **********
2025-05-03 01:01:11.570486 | orchestrator | changed: [testbed-manager]
2025-05-03 01:01:11.570500 | orchestrator |
2025-05-03 01:01:11.570514 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2025-05-03 01:01:11.570528 | orchestrator | Saturday 03 May 2025 01:00:20 +0000 (0:00:01.002) 0:00:06.667 **********
2025-05-03 01:01:11.570542 | orchestrator | changed: [testbed-manager]
2025-05-03 01:01:11.570556 | orchestrator |
2025-05-03 01:01:11.570569 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2025-05-03 01:01:11.570583 | orchestrator | Saturday 03 May 2025 01:00:21 +0000 (0:00:01.053) 0:00:07.720 **********
2025-05-03 01:01:11.570597 | orchestrator | changed: [testbed-manager]
2025-05-03 01:01:11.570611 | orchestrator |
2025-05-03 01:01:11.570625 | orchestrator | TASK [Create admin user] *******************************************************
2025-05-03 01:01:11.570638 | orchestrator | Saturday 03 May 2025 01:00:22 +0000 (0:00:01.118) 0:00:08.839 **********
2025-05-03 01:01:11.570652 | orchestrator | changed: [testbed-manager]
2025-05-03 01:01:11.570666 | orchestrator |
2025-05-03 01:01:11.570680 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2025-05-03 01:01:11.570694 | orchestrator | Saturday 03 May 2025 01:00:39 +0000 (0:00:17.097) 0:00:25.937 **********
2025-05-03 01:01:11.570708 | orchestrator | skipping: [testbed-manager]
2025-05-03 01:01:11.570721 | orchestrator |
2025-05-03 01:01:11.570735 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-05-03 01:01:11.570749 | orchestrator |
2025-05-03 01:01:11.570767 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-05-03 01:01:11.570783 | orchestrator | Saturday 03 May 2025 01:00:40 +0000 (0:00:00.717) 0:00:26.655 **********
2025-05-03 01:01:11.570799 | orchestrator | changed: [testbed-node-0]
2025-05-03 01:01:11.570830 | orchestrator |
2025-05-03 01:01:11.570872 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-05-03 01:01:11.570889 | orchestrator |
2025-05-03 01:01:11.570903 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-05-03 01:01:11.570917 | orchestrator | Saturday 03 May 2025 01:00:42 +0000 (0:00:02.001) 0:00:28.656 **********
2025-05-03 01:01:11.570930 | orchestrator | changed: [testbed-node-1]
2025-05-03 01:01:11.570944 | orchestrator |
2025-05-03 01:01:11.570958 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-05-03 01:01:11.570972 | orchestrator |
2025-05-03 01:01:11.570986 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-05-03 01:01:11.571000 | orchestrator | Saturday 03 May 2025 01:00:44 +0000 (0:00:01.799) 0:00:30.456 **********
2025-05-03 01:01:11.571013 | orchestrator | changed: [testbed-node-2]
2025-05-03 01:01:11.571027 | orchestrator |
2025-05-03 01:01:11.571041 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 01:01:11.571112 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-03 01:01:11.571129 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 01:01:11.571144 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 01:01:11.571157 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 01:01:11.571171 | orchestrator |
2025-05-03 01:01:11.571185 | orchestrator |
2025-05-03 01:01:11.571199 | orchestrator |
2025-05-03 01:01:11.571213 | orchestrator | TASKS RECAP ********************************************************************
2025-05-03 01:01:11.571227 | orchestrator | Saturday 03 May 2025 01:00:45 +0000 (0:00:01.483) 0:00:31.940 **********
2025-05-03 01:01:11.571240 | orchestrator | ===============================================================================
2025-05-03 01:01:11.571254 | orchestrator | Create admin user ------------------------------------------------------ 17.10s
2025-05-03 01:01:11.571316 | orchestrator | Restart ceph manager service -------------------------------------------- 5.29s
2025-05-03 01:01:11.571333 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.83s
2025-05-03 01:01:11.571347 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.12s
2025-05-03 01:01:11.571361 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.05s
2025-05-03 01:01:11.571375 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.00s
2025-05-03 01:01:11.571389 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 0.97s
2025-05-03 01:01:11.571403 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.88s
2025-05-03 01:01:11.571416 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.87s
2025-05-03 01:01:11.571430 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.80s
2025-05-03 01:01:11.571450 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.72s
2025-05-03 01:01:11.571464 | orchestrator |
2025-05-03 01:01:11.571478 | orchestrator |
2025-05-03 01:01:11.571491 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-03 01:01:11.571505 | orchestrator |
2025-05-03 01:01:11.571519 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-03 01:01:11.571533 | orchestrator | Saturday 03 May 2025 00:59:08 +0000 (0:00:00.218) 0:00:00.218 **********
2025-05-03 01:01:11.571547 | orchestrator | ok: [testbed-node-0]
2025-05-03 01:01:11.571561 | orchestrator | ok: [testbed-node-1]
2025-05-03 01:01:11.571575 | orchestrator | ok: [testbed-node-2]
2025-05-03 01:01:11.571588 | orchestrator |
2025-05-03 01:01:11.571602 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-03 01:01:11.571625 | orchestrator | Saturday 03 May 2025 00:59:09 +0000 (0:00:00.319) 0:00:00.537 **********
2025-05-03 01:01:11.571639 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-05-03 01:01:11.571653 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-05-03 01:01:11.571667 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-05-03 01:01:11.571681 | orchestrator |
2025-05-03 01:01:11.571695 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-05-03 01:01:11.571709 | orchestrator |
2025-05-03 01:01:11.571722 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-05-03 01:01:11.571736 | orchestrator | Saturday 03 May 2025 00:59:09 +0000 (0:00:00.410) 0:00:00.948 **********
2025-05-03 01:01:11.571750 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 01:01:11.571765 | orchestrator |
2025-05-03 01:01:11.571778 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-05-03 01:01:11.571792 | orchestrator | Saturday 03 May 2025 00:59:10 +0000 (0:00:00.697) 0:00:01.645 **********
2025-05-03 01:01:11.571806 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-05-03 01:01:11.571911 | orchestrator |
2025-05-03 01:01:11.571929 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-05-03 01:01:11.571943 | orchestrator | Saturday 03 May 2025 00:59:13 +0000 (0:00:03.367) 0:00:05.013 **********
2025-05-03 01:01:11.571957 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-05-03 01:01:11.571971 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-05-03 01:01:11.571985 | orchestrator |
2025-05-03 01:01:11.571999 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-05-03 01:01:11.572013 | orchestrator | Saturday 03 May 2025 00:59:20 +0000 (0:00:06.566) 0:00:11.580 **********
2025-05-03 01:01:11.572027 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-05-03 01:01:11.572041 | orchestrator |
2025-05-03 01:01:11.572055 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-05-03 01:01:11.572069 | orchestrator | Saturday 03 May 2025 00:59:23 +0000 (0:00:03.343) 0:00:14.923 **********
2025-05-03 01:01:11.572083 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-03 01:01:11.572097 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-05-03 01:01:11.572111 | orchestrator |
2025-05-03 01:01:11.572125 | orchestrator |
TASK [service-ks-register : barbican | Creating roles] *************************
2025-05-03 01:01:11.572138 | orchestrator | Saturday 03 May 2025 00:59:27 +0000 (0:00:04.058) 0:00:18.982 **********
2025-05-03 01:01:11.572152 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-03 01:01:11.572166 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2025-05-03 01:01:11.572181 | orchestrator | changed: [testbed-node-0] => (item=creator)
2025-05-03 01:01:11.572193 | orchestrator | changed: [testbed-node-0] => (item=observer)
2025-05-03 01:01:11.572206 | orchestrator | changed: [testbed-node-0] => (item=audit)
2025-05-03 01:01:11.572218 | orchestrator |
2025-05-03 01:01:11.572230 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2025-05-03 01:01:11.572243 | orchestrator | Saturday 03 May 2025 00:59:42 +0000 (0:00:15.269) 0:00:34.252 **********
2025-05-03 01:01:11.572255 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2025-05-03 01:01:11.572267 | orchestrator |
2025-05-03 01:01:11.572280 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2025-05-03 01:01:11.572292 | orchestrator | Saturday 03 May 2025 00:59:46 +0000 (0:00:04.039) 0:00:38.291 **********
2025-05-03 01:01:11.572315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-03 01:01:11.572342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-03 01:01:11.572357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-03 01:01:11.572371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-03 01:01:11.572386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-03 01:01:11.572413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-03 01:01:11.572427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-03 01:01:11.572441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-03 01:01:11.572453 | orchestrator | changed: [testbed-node-0] =>
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-03 01:01:11.572466 | orchestrator |
2025-05-03 01:01:11.572479 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2025-05-03 01:01:11.572492 | orchestrator | Saturday 03 May 2025 00:59:48 +0000 (0:00:01.925) 0:00:40.216 **********
2025-05-03 01:01:11.572505 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2025-05-03 01:01:11.572517 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2025-05-03 01:01:11.572530 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2025-05-03 01:01:11.572542 | orchestrator |
2025-05-03 01:01:11.572555 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-05-03 01:01:11.572567 | orchestrator | Saturday 03 May 2025 00:59:50 +0000 (0:00:01.916) 0:00:42.132 **********
2025-05-03 01:01:11.572580 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:01:11.572592 | orchestrator |
2025-05-03 01:01:11.572605 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-05-03 01:01:11.572617 | orchestrator | Saturday 03 May 2025 00:59:50 +0000 (0:00:00.258) 0:00:42.391 **********
2025-05-03 01:01:11.572630 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:01:11.572648 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:01:11.572661 | 
orchestrator | skipping: [testbed-node-2]
2025-05-03 01:01:11.572673 | orchestrator |
2025-05-03 01:01:11.572686 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-05-03 01:01:11.572698 | orchestrator | Saturday 03 May 2025 00:59:51 +0000 (0:00:00.459) 0:00:42.850 **********
2025-05-03 01:01:11.572710 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 01:01:11.572723 | orchestrator |
2025-05-03 01:01:11.572735 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2025-05-03 01:01:11.572752 | orchestrator | Saturday 03 May 2025 00:59:52 +0000 (0:00:00.599) 0:00:43.450 **********
2025-05-03 01:01:11.572773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-03 01:01:11.572788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': 
''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-03 01:01:11.572802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-03 01:01:11.572816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-03 01:01:11.572836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-03 01:01:11.572879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 
'timeout': '30'}}}) 2025-05-03 01:01:11.572894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-03 01:01:11.572908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-03 01:01:11.572921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 
'timeout': '30'}}}) 2025-05-03 01:01:11.572934 | orchestrator | 2025-05-03 01:01:11.572947 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-05-03 01:01:11.572959 | orchestrator | Saturday 03 May 2025 00:59:56 +0000 (0:00:04.609) 0:00:48.059 ********** 2025-05-03 01:01:11.572979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-03 01:01:11.572999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-03 01:01:11.573013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-03 01:01:11.573026 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:01:11.573039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-03 01:01:11.573053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-03 01:01:11.573073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-03 01:01:11.573086 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:01:11.573105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 
'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-03 01:01:11.573119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-03 01:01:11.573133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-03 01:01:11.573146 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:01:11.573159 | orchestrator | 2025-05-03 01:01:11.573172 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-05-03 
01:01:11.573184 | orchestrator | Saturday 03 May 2025 00:59:57 +0000 (0:00:01.048) 0:00:49.108 ********** 2025-05-03 01:01:11.573197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-03 01:01:11.573220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-03 01:01:11.573234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-03 01:01:11.573251 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:01:11.573265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-03 01:01:11.573279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-03 01:01:11.573293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-03 01:01:11.573316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-worker 5672'], 'timeout': '30'}}})  2025-05-03 01:01:11.573330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-03 01:01:11.573343 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:01:11.573362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-03 01:01:11.573375 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:01:11.573388 | orchestrator | 2025-05-03 01:01:11.573401 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-05-03 01:01:11.573414 | orchestrator | Saturday 03 May 2025 00:59:58 +0000 (0:00:01.184) 0:00:50.293 ********** 2025-05-03 01:01:11.573427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-03 01:01:11.573446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-03 01:01:11.573459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-03 01:01:11.573479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-03 01:01:11.573494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-03 01:01:11.573521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-03 01:01:11.573540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-03 01:01:11.573554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 
'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-03 01:01:11.573567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-03 01:01:11.573579 | orchestrator | 2025-05-03 01:01:11.573592 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-05-03 01:01:11.573605 | orchestrator | Saturday 03 May 2025 01:00:03 +0000 (0:00:05.008) 0:00:55.301 ********** 2025-05-03 01:01:11.573617 | orchestrator | changed: [testbed-node-1] 2025-05-03 01:01:11.573630 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:01:11.573642 | orchestrator | changed: [testbed-node-2] 2025-05-03 01:01:11.573655 | orchestrator | 2025-05-03 01:01:11.573667 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-05-03 01:01:11.573680 | orchestrator | Saturday 03 May 2025 01:00:07 +0000 (0:00:04.000) 0:00:59.302 
********** 2025-05-03 01:01:11.573697 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-03 01:01:11.573710 | orchestrator | 2025-05-03 01:01:11.573723 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-05-03 01:01:11.573735 | orchestrator | Saturday 03 May 2025 01:00:11 +0000 (0:00:03.294) 0:01:02.596 ********** 2025-05-03 01:01:11.573748 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:01:11.573760 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:01:11.573779 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:01:11.573803 | orchestrator | 2025-05-03 01:01:11.573825 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-05-03 01:01:11.573970 | orchestrator | Saturday 03 May 2025 01:00:12 +0000 (0:00:01.500) 0:01:04.097 ********** 2025-05-03 01:01:11.574056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-03 01:01:11.574105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-03 01:01:11.574124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-03 01:01:11.574138 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-03 01:01:11.574164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-03 01:01:11.574178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-03 01:01:11.574198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-03 01:01:11.574211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-03 01:01:11.574224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-03 01:01:11.574237 | orchestrator | 2025-05-03 01:01:11.574250 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-05-03 01:01:11.574262 | orchestrator | Saturday 03 May 2025 01:00:23 +0000 (0:00:10.686) 0:01:14.783 ********** 2025-05-03 01:01:11.574281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-03 01:01:11.574295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-03 01:01:11.574315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-03 01:01:11.574328 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:01:11.574341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-03 01:01:11.574355 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-03 01:01:11.574368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-03 01:01:11.574380 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:01:11.574399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-03 01:01:11.574418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-03 01:01:11.574429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-03 01:01:11.574440 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:01:11.574450 | orchestrator | 2025-05-03 01:01:11.574460 | orchestrator | TASK [barbican : Check 
barbican containers] ************************************ 2025-05-03 01:01:11.574471 | orchestrator | Saturday 03 May 2025 01:00:24 +0000 (0:00:01.626) 0:01:16.410 ********** 2025-05-03 01:01:11.574481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-03 01:01:11.574497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-03 01:01:11.574514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-03 01:01:11.574526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-03 01:01:11.574536 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-03 01:01:11.574547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-03 01:01:11.574557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-03 01:01:11.574594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-03 01:01:11.574611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-03 01:01:11.574621 | orchestrator | 2025-05-03 01:01:11.574632 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-03 01:01:11.574642 | orchestrator | Saturday 03 May 2025 01:00:28 +0000 (0:00:03.501) 0:01:19.911 ********** 2025-05-03 01:01:11.574652 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:01:11.574663 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:01:11.574673 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:01:11.574683 | orchestrator | 2025-05-03 01:01:11.574693 | 
orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-05-03 01:01:11.574703 | orchestrator | Saturday 03 May 2025 01:00:29 +0000 (0:00:00.558) 0:01:20.470 ********** 2025-05-03 01:01:11.574713 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:01:11.574724 | orchestrator | 2025-05-03 01:01:11.574734 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-05-03 01:01:11.574744 | orchestrator | Saturday 03 May 2025 01:00:31 +0000 (0:00:02.895) 0:01:23.366 ********** 2025-05-03 01:01:11.574753 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:01:11.574767 | orchestrator | 2025-05-03 01:01:11.574777 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-05-03 01:01:11.574787 | orchestrator | Saturday 03 May 2025 01:00:34 +0000 (0:00:02.287) 0:01:25.653 ********** 2025-05-03 01:01:11.574797 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:01:11.574807 | orchestrator | 2025-05-03 01:01:11.574817 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-03 01:01:11.574827 | orchestrator | Saturday 03 May 2025 01:00:45 +0000 (0:00:11.333) 0:01:36.987 ********** 2025-05-03 01:01:11.574837 | orchestrator | 2025-05-03 01:01:11.574873 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-03 01:01:11.574885 | orchestrator | Saturday 03 May 2025 01:00:45 +0000 (0:00:00.089) 0:01:37.076 ********** 2025-05-03 01:01:11.574895 | orchestrator | 2025-05-03 01:01:11.574905 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-03 01:01:11.574919 | orchestrator | Saturday 03 May 2025 01:00:45 +0000 (0:00:00.232) 0:01:37.308 ********** 2025-05-03 01:01:11.574929 | orchestrator | 2025-05-03 01:01:11.574940 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api 
container] ******************** 2025-05-03 01:01:11.574950 | orchestrator | Saturday 03 May 2025 01:00:45 +0000 (0:00:00.042) 0:01:37.351 ********** 2025-05-03 01:01:11.574960 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:01:11.574970 | orchestrator | changed: [testbed-node-1] 2025-05-03 01:01:11.574980 | orchestrator | changed: [testbed-node-2] 2025-05-03 01:01:11.574990 | orchestrator | 2025-05-03 01:01:11.575000 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-05-03 01:01:11.575010 | orchestrator | Saturday 03 May 2025 01:00:52 +0000 (0:00:06.349) 0:01:43.700 ********** 2025-05-03 01:01:11.575025 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:01:11.575038 | orchestrator | changed: [testbed-node-2] 2025-05-03 01:01:11.575058 | orchestrator | changed: [testbed-node-1] 2025-05-03 01:01:11.575085 | orchestrator | 2025-05-03 01:01:11.575102 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-05-03 01:01:11.575119 | orchestrator | Saturday 03 May 2025 01:00:58 +0000 (0:00:06.242) 0:01:49.943 ********** 2025-05-03 01:01:11.575137 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:01:11.575154 | orchestrator | changed: [testbed-node-1] 2025-05-03 01:01:11.575165 | orchestrator | changed: [testbed-node-2] 2025-05-03 01:01:11.575175 | orchestrator | 2025-05-03 01:01:11.575185 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-03 01:01:11.575196 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-03 01:01:11.575206 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-03 01:01:11.575217 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-03 01:01:11.575227 | orchestrator | 2025-05-03 01:01:11.575237 | 
orchestrator | 2025-05-03 01:01:11.575253 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-03 01:01:11.575276 | orchestrator | Saturday 03 May 2025 01:01:10 +0000 (0:00:11.955) 0:02:01.898 ********** 2025-05-03 01:01:14.600288 | orchestrator | =============================================================================== 2025-05-03 01:01:14.601262 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.27s 2025-05-03 01:01:14.601348 | orchestrator | barbican : Restart barbican-worker container --------------------------- 11.96s 2025-05-03 01:01:14.601380 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.33s 2025-05-03 01:01:14.601409 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.69s 2025-05-03 01:01:14.601437 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.57s 2025-05-03 01:01:14.601465 | orchestrator | barbican : Restart barbican-api container ------------------------------- 6.35s 2025-05-03 01:01:14.601493 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 6.24s 2025-05-03 01:01:14.601520 | orchestrator | barbican : Copying over config.json files for services ------------------ 5.01s 2025-05-03 01:01:14.601550 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.61s 2025-05-03 01:01:14.601578 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.06s 2025-05-03 01:01:14.601606 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.04s 2025-05-03 01:01:14.601633 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 4.00s 2025-05-03 01:01:14.601661 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.50s 
2025-05-03 01:01:14.601689 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.37s 2025-05-03 01:01:14.601716 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.34s 2025-05-03 01:01:14.601744 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 3.29s 2025-05-03 01:01:14.601772 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.90s 2025-05-03 01:01:14.601800 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.29s 2025-05-03 01:01:14.601827 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.93s 2025-05-03 01:01:14.601879 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.92s 2025-05-03 01:01:14.601907 | orchestrator | 2025-05-03 01:01:11 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED 2025-05-03 01:01:14.602306 | orchestrator | 2025-05-03 01:01:11 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:01:14.603398 | orchestrator | 2025-05-03 01:01:11 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:01:14.603435 | orchestrator | 2025-05-03 01:01:11 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:01:14.603497 | orchestrator | 2025-05-03 01:01:14 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:01:17.636181 | orchestrator | 2025-05-03 01:01:14 | INFO  | Task d3f6fe28-5013-4e2a-a4b3-01cbff35bfff is in state STARTED 2025-05-03 01:01:17.636286 | orchestrator | 2025-05-03 01:01:14 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state STARTED 2025-05-03 01:01:17.636305 | orchestrator | 2025-05-03 01:01:14 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:01:17.636320 | orchestrator | 2025-05-03 01:01:14 |
INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:01:17.636335 | orchestrator | 2025-05-03 01:01:14 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:02:15.502394 | orchestrator | 2025-05-03 01:02:15 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:02:15.503703 | orchestrator | 2025-05-03 01:02:15 | INFO  | Task d3f6fe28-5013-4e2a-a4b3-01cbff35bfff is in state STARTED 2025-05-03 01:02:15.504187 | orchestrator | 2025-05-03 01:02:15 | INFO  | Task 9b87bf99-4516-493b-8c6a-269a4f9c6073 is in state SUCCESS 2025-05-03 01:02:15.506277 | orchestrator | 2025-05-03 01:02:15.506372 | orchestrator | 2025-05-03 01:02:15.506393 | orchestrator | PLAY [Group hosts based on configuration] **************************************
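The repeated "is in state STARTED ... Wait 1 second(s) until the next check" lines above come from a simple poll-until-done loop: each pending task is queried once per cycle, and the loop sleeps one second until every task has left the STARTED state. A minimal sketch of that pattern follows; `wait_for_tasks` and `get_task_state` are hypothetical names chosen for illustration, not part of the actual osism CLI.

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll each task until none is in state STARTED anymore.

    get_task_state is an assumed callable (task_id -> state string);
    the real tooling would query the task backend instead.
    """
    pending = set(task_ids)
    results = {}
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                results[task_id] = state  # task finished (e.g. SUCCESS)
        pending -= results.keys()
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return results
```

With the one-second interval used in this job, five parallel tasks produce exactly the cadence of log lines seen here: one status line per task per cycle, followed by one wait line.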
2025-05-03 01:02:15.506409 | orchestrator | 2025-05-03 01:02:15.506424 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-03 01:02:15.506439 | orchestrator | Saturday 03 May 2025 00:59:09 +0000 (0:00:00.320) 0:00:00.320 ********** 2025-05-03 01:02:15.506453 | orchestrator | ok: [testbed-node-0] 2025-05-03 01:02:15.506468 | orchestrator | ok: [testbed-node-1] 2025-05-03 01:02:15.506482 | orchestrator | ok: [testbed-node-2] 2025-05-03 01:02:15.506497 | orchestrator | 2025-05-03 01:02:15.506511 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-03 01:02:15.506525 | orchestrator | Saturday 03 May 2025 00:59:09 +0000 (0:00:00.346) 0:00:00.667 ********** 2025-05-03 01:02:15.506540 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-05-03 01:02:15.506555 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-05-03 01:02:15.506569 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-05-03 01:02:15.506583 | orchestrator | 2025-05-03 01:02:15.506597 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-05-03 01:02:15.506611 | orchestrator | 2025-05-03 01:02:15.506625 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-03 01:02:15.506639 | orchestrator | Saturday 03 May 2025 00:59:09 +0000 (0:00:00.242) 0:00:00.909 ********** 2025-05-03 01:02:15.506653 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 01:02:15.506668 | orchestrator | 2025-05-03 01:02:15.506682 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-05-03 01:02:15.506696 | orchestrator | Saturday 03 May 2025 00:59:10 +0000 (0:00:00.573) 0:00:01.482 ********** 2025-05-03 01:02:15.506710 | orchestrator | changed: 
[testbed-node-0] => (item=designate (dns)) 2025-05-03 01:02:15.506724 | orchestrator | 2025-05-03 01:02:15.506738 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-05-03 01:02:15.506752 | orchestrator | Saturday 03 May 2025 00:59:14 +0000 (0:00:03.589) 0:00:05.072 ********** 2025-05-03 01:02:15.506836 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-05-03 01:02:15.506854 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-05-03 01:02:15.506869 | orchestrator | 2025-05-03 01:02:15.506883 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-05-03 01:02:15.506913 | orchestrator | Saturday 03 May 2025 00:59:20 +0000 (0:00:06.295) 0:00:11.368 ********** 2025-05-03 01:02:15.506928 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-03 01:02:15.506942 | orchestrator | 2025-05-03 01:02:15.506956 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-05-03 01:02:15.506970 | orchestrator | Saturday 03 May 2025 00:59:23 +0000 (0:00:03.424) 0:00:14.792 ********** 2025-05-03 01:02:15.506984 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-03 01:02:15.506998 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-05-03 01:02:15.507012 | orchestrator | 2025-05-03 01:02:15.507026 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-05-03 01:02:15.507040 | orchestrator | Saturday 03 May 2025 00:59:27 +0000 (0:00:03.797) 0:00:18.590 ********** 2025-05-03 01:02:15.507054 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-03 01:02:15.507068 | orchestrator | 2025-05-03 01:02:15.507082 | orchestrator | TASK [service-ks-register : designate | Granting user roles] 
******************* 2025-05-03 01:02:15.507096 | orchestrator | Saturday 03 May 2025 00:59:30 +0000 (0:00:03.085) 0:00:21.675 ********** 2025-05-03 01:02:15.507110 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-05-03 01:02:15.507124 | orchestrator | 2025-05-03 01:02:15.507139 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-05-03 01:02:15.507152 | orchestrator | Saturday 03 May 2025 00:59:35 +0000 (0:00:04.589) 0:00:26.265 ********** 2025-05-03 01:02:15.507169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-03 01:02:15.507273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-03 01:02:15.507295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-03 01:02:15.507323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-03 01:02:15.507339 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-03 01:02:15.507354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-03 01:02:15.507369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': 
'30'}}}) 2025-05-03 01:02:15.507417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.507434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.507457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.507473 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.507488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.507503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.507518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.507562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.507587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.507602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.507617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.507632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.507647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.507669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.507684 | orchestrator |
2025-05-03 01:02:15.507699 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2025-05-03 01:02:15.507720 | orchestrator | Saturday 03 May 2025 00:59:38 +0000 (0:00:02.981) 0:00:29.246 **********
2025-05-03 01:02:15.507734 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:02:15.507749 | orchestrator |
2025-05-03 01:02:15.507763 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2025-05-03 01:02:15.507805 | orchestrator | Saturday 03 May 2025 00:59:38 +0000 (0:00:00.117) 0:00:29.363 **********
2025-05-03 01:02:15.507820 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:02:15.507834 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:02:15.507848 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:02:15.507862 | orchestrator |
2025-05-03 01:02:15.507876 | orchestrator | TASK [designate : include_tasks] ***********************************************
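(Editor's note on the loop results above: the alternating changed:/skipping: lines come from the role iterating over its per-service dict and acting only on entries whose 'enabled' flag is true, which is why designate-sink, with 'enabled': False, is skipped on every node. A minimal Python sketch of that selection logic, using a trimmed, hypothetical services dict rather than the real role variable:)

```python
# Hypothetical, trimmed services dict mirroring the shape of the items in the
# log above; the real role variable carries many more fields per service.
services = {
    "designate-worker": {"container_name": "designate_worker", "enabled": True},
    "designate-sink": {"container_name": "designate_sink", "enabled": False},
}

def plan_actions(services):
    """Split services into acted-on and skipped container names,
    analogous to the changed:/skipping: lines per loop item."""
    changed = [v["container_name"] for v in services.values() if v["enabled"]]
    skipped = [v["container_name"] for v in services.values() if not v["enabled"]]
    return changed, skipped

changed, skipped = plan_actions(services)
# changed -> ['designate_worker'], skipped -> ['designate_sink']
```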
2025-05-03 01:02:15.507890 | orchestrator | Saturday 03 May 2025 00:59:38 +0000 (0:00:00.398) 0:00:29.762 ********** 2025-05-03 01:02:15.507904 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 01:02:15.507919 | orchestrator | 2025-05-03 01:02:15.507932 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-05-03 01:02:15.507946 | orchestrator | Saturday 03 May 2025 00:59:39 +0000 (0:00:00.598) 0:00:30.361 ********** 2025-05-03 01:02:15.507962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-03 01:02:15.507977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-03 01:02:15.507992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-03 01:02:15.508030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-03 
01:02:15.508056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-03 01:02:15.508071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-03 01:02:15.508087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 
5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.508102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.508116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.508131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.508209 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.508228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.508242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.508257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.508272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.508287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.508309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.508355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.508372 | orchestrator |
2025-05-03 01:02:15.508386 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2025-05-03 01:02:15.508401 | orchestrator | Saturday 03 May 2025 00:59:45 +0000 (0:00:06.298) 0:00:36.659 **********
2025-05-03 01:02:15.508416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-03 01:02:15.508430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-03 01:02:15.508446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.508461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.508483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.508561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.508579 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:02:15.508594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-03 01:02:15.508609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-03 01:02:15.508624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.508639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.508661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.508708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.508726 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:02:15.508741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-03 01:02:15.508756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-03 01:02:15.508770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.508814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.508830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.508877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.508894 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:02:15.508908 | orchestrator |
2025-05-03 01:02:15.508923 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2025-05-03 01:02:15.508937 | orchestrator | Saturday 03 May 2025 00:59:47 +0000 (0:00:01.425) 0:00:38.084 **********
2025-05-03 01:02:15.508951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-03 01:02:15.508966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-03 01:02:15.508981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.509003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.509019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.509065 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.509082 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:02:15.509096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-03 01:02:15.509111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-03 01:02:15.509126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.509154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.509169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.509215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.509232 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:02:15.509247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-03 01:02:15.509267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-03 01:02:15.509290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.509305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.509320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.509367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.509384 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:02:15.509398 | orchestrator | 2025-05-03 01:02:15.509412 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-05-03 01:02:15.509426 | orchestrator | Saturday 03 May 2025 00:59:48 +0000 (0:00:01.202) 0:00:39.287 ********** 2025-05-03 01:02:15.509441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-03 01:02:15.509456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-03 01:02:15.509478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-03 01:02:15.509493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-03 01:02:15.509539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-03 01:02:15.509557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-03 01:02:15.509572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.509593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.509608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.509623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.509638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.509684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.509701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.509716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.509737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.509752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.509767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.509833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.509851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.509866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.509888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  
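The changed/skipping pairs above follow one pattern: each task loops over the designate service definitions and acts only on items whose `enabled` flag is true, which is why every node reports `skipping` for `designate-sink` (`'enabled': False`) while the other services come back `changed`. A minimal sketch of that selection logic (illustrative only, not kolla-ansible's actual implementation; the service dicts are shaped like the ones in the log output):

```python
# Hypothetical sketch of the loop behaviour seen in the task output:
# definitions with 'enabled': False (e.g. designate-sink) are skipped,
# the rest are handled per node.

def plan_services(services):
    """Split a {name: definition} mapping into (handled, skipped) name
    lists, mirroring the changed/skipping records in the log."""
    handled, skipped = [], []
    for name, svc in services.items():
        (handled if svc.get("enabled") else skipped).append(name)
    return handled, skipped

# Example entries copied from the definitions logged above (trimmed to
# the keys relevant here).
services = {
    "designate-worker": {
        "container_name": "designate_worker",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206",
        "healthcheck": {"test": ["CMD-SHELL", "healthcheck_port designate-worker 5672"]},
    },
    "designate-sink": {
        "container_name": "designate_sink",
        "enabled": False,
        "image": "registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206",
        "healthcheck": {"test": ["CMD-SHELL", "healthcheck_port designate-sink 5672"]},
    },
}

handled, skipped = plan_services(services)
print(handled)  # ['designate-worker']
print(skipped)  # ['designate-sink']
```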
2025-05-03 01:02:15.509903 | orchestrator | 2025-05-03 01:02:15.509917 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-05-03 01:02:15.509931 | orchestrator | Saturday 03 May 2025 00:59:54 +0000 (0:00:06.092) 0:00:45.379 ********** 2025-05-03 01:02:15.509946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-03 01:02:15.509961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-03 01:02:15.510007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-03 01:02:15.510076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-03 01:02:15.510101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-03 01:02:15.510116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-03 01:02:15.510131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.510146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.510197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.510215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.510237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.510251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.510266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.510281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.510295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.510343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.510361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.510383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.510397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.510412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.510427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.510441 | orchestrator |
2025-05-03 01:02:15.510456 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2025-05-03 01:02:15.510470 | orchestrator | Saturday 03 May 2025 01:00:22 +0000 (0:00:27.774) 0:01:13.154 **********
2025-05-03 01:02:15.510484 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-05-03 01:02:15.510498 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-05-03 01:02:15.510513 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-05-03 01:02:15.510526 | orchestrator |
2025-05-03 01:02:15.510540 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2025-05-03 01:02:15.510554 | orchestrator | Saturday 03 May 2025 01:00:28 +0000 (0:00:06.681) 0:01:19.836 **********
2025-05-03 01:02:15.510568 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-05-03 01:02:15.510587 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-05-03 01:02:15.510608 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-05-03 01:02:15.510622 | orchestrator |
2025-05-03 01:02:15.510636 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2025-05-03 01:02:15.510650 | orchestrator | Saturday 03 May 2025 01:00:33 +0000 (0:00:04.965) 0:01:24.801 **********
2025-05-03 01:02:15.510664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-03 01:02:15.510679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-03 01:02:15.510695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-03 01:02:15.510709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-03 01:02:15.510733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.510755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.510769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.510805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-03 01:02:15.510821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.510836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.510850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.510878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-03 01:02:15.510894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.510908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.510923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.510938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.510953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.510967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.510993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.511009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.511024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.511038 | orchestrator |
2025-05-03 01:02:15.511052 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2025-05-03 01:02:15.511067 | orchestrator | Saturday 03 May 2025 01:00:37 +0000 (0:00:03.527) 0:01:28.329 **********
2025-05-03 01:02:15.511081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-03 01:02:15.511096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-03 01:02:15.511125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-03 01:02:15.511141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-03 01:02:15.511156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.511170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.511185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.511200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-03 01:02:15.511229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.511245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.511260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.511274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-03 01:02:15.511289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.511304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.511325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.511345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.511361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.511376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.511390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.511405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.511419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-03 01:02:15.511441 | orchestrator |
2025-05-03 01:02:15.511455 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-03 01:02:15.511469 | orchestrator | Saturday 03 May 2025 01:00:40 +0000 (0:00:03.424) 0:01:31.753 **********
2025-05-03 01:02:15.511483 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:02:15.511497 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:02:15.511511 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:02:15.511525 | orchestrator |
2025-05-03 01:02:15.511539 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2025-05-03 01:02:15.511553 | orchestrator | Saturday 03 May 2025 01:00:41 +0000 (0:00:00.481) 0:01:32.235 **********
2025-05-03 01:02:15.511573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-03 01:02:15.511589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout':
'30'}}})  2025-05-03 01:02:15.511605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.511619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.511634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.511655 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.511676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.511691 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:02:15.511706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': 
'9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-03 01:02:15.511721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-03 01:02:15.511736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.511757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.511800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.511818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.511839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.511855 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:02:15.511870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-03 01:02:15.511884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-03 01:02:15.511906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.511921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.511935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.511955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.511970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.511985 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:02:15.511999 | orchestrator | 2025-05-03 01:02:15.512013 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-05-03 01:02:15.512028 | orchestrator | Saturday 03 May 2025 01:00:42 +0000 (0:00:01.520) 0:01:33.756 ********** 2025-05-03 01:02:15.512042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-03 01:02:15.512063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-03 01:02:15.512078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-03 01:02:15.512099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-03 01:02:15.512115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-03 01:02:15.512129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-03 01:02:15.512150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.512165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.512180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.512199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.512215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.512229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.512244 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.512265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.512280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.512295 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.512345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.512363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.512378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': 
False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.512399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-03 01:02:15.512414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-03 01:02:15.512429 | orchestrator | 2025-05-03 01:02:15.512444 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-03 01:02:15.512458 | orchestrator | Saturday 03 May 2025 01:00:47 +0000 (0:00:05.103) 0:01:38.860 ********** 2025-05-03 
01:02:15.512472 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:02:15.512486 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:02:15.512500 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:02:15.512514 | orchestrator | 2025-05-03 01:02:15.512528 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-05-03 01:02:15.512542 | orchestrator | Saturday 03 May 2025 01:00:48 +0000 (0:00:00.703) 0:01:39.564 ********** 2025-05-03 01:02:15.512556 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-05-03 01:02:15.512570 | orchestrator | 2025-05-03 01:02:15.512584 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-05-03 01:02:15.512598 | orchestrator | Saturday 03 May 2025 01:00:50 +0000 (0:00:02.296) 0:01:41.861 ********** 2025-05-03 01:02:15.512612 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-03 01:02:15.512626 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-05-03 01:02:15.512640 | orchestrator | 2025-05-03 01:02:15.512654 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-05-03 01:02:15.512668 | orchestrator | Saturday 03 May 2025 01:00:53 +0000 (0:00:02.414) 0:01:44.276 ********** 2025-05-03 01:02:15.512681 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:02:15.512696 | orchestrator | 2025-05-03 01:02:15.512710 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-03 01:02:15.512724 | orchestrator | Saturday 03 May 2025 01:01:06 +0000 (0:00:13.723) 0:01:58.000 ********** 2025-05-03 01:02:15.512737 | orchestrator | 2025-05-03 01:02:15.512751 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-03 01:02:15.512765 | orchestrator | Saturday 03 May 2025 01:01:07 +0000 (0:00:00.054) 0:01:58.054 ********** 2025-05-03 
01:02:15.512800 | orchestrator | 2025-05-03 01:02:15.512819 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-03 01:02:15.512840 | orchestrator | Saturday 03 May 2025 01:01:07 +0000 (0:00:00.099) 0:01:58.154 ********** 2025-05-03 01:02:15.512855 | orchestrator | 2025-05-03 01:02:15.512868 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-05-03 01:02:15.512882 | orchestrator | Saturday 03 May 2025 01:01:07 +0000 (0:00:00.097) 0:01:58.251 ********** 2025-05-03 01:02:15.512896 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:02:15.512918 | orchestrator | changed: [testbed-node-1] 2025-05-03 01:02:15.512932 | orchestrator | changed: [testbed-node-2] 2025-05-03 01:02:15.512946 | orchestrator | 2025-05-03 01:02:15.512960 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-05-03 01:02:15.512974 | orchestrator | Saturday 03 May 2025 01:01:20 +0000 (0:00:13.419) 0:02:11.671 ********** 2025-05-03 01:02:15.512988 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:02:15.513002 | orchestrator | changed: [testbed-node-1] 2025-05-03 01:02:15.513016 | orchestrator | changed: [testbed-node-2] 2025-05-03 01:02:15.513030 | orchestrator | 2025-05-03 01:02:15.513044 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-05-03 01:02:15.513058 | orchestrator | Saturday 03 May 2025 01:01:32 +0000 (0:00:11.695) 0:02:23.366 ********** 2025-05-03 01:02:15.513072 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:02:15.513086 | orchestrator | changed: [testbed-node-1] 2025-05-03 01:02:15.513100 | orchestrator | changed: [testbed-node-2] 2025-05-03 01:02:15.513114 | orchestrator | 2025-05-03 01:02:15.513128 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-05-03 01:02:15.513142 | orchestrator | Saturday 03 May 2025 
01:01:38 +0000 (0:00:06.550) 0:02:29.917 ********** 2025-05-03 01:02:15.513156 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:02:15.513170 | orchestrator | changed: [testbed-node-1] 2025-05-03 01:02:15.513184 | orchestrator | changed: [testbed-node-2] 2025-05-03 01:02:15.513198 | orchestrator | 2025-05-03 01:02:15.513212 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-05-03 01:02:15.513226 | orchestrator | Saturday 03 May 2025 01:01:46 +0000 (0:00:07.264) 0:02:37.181 ********** 2025-05-03 01:02:15.513240 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:02:15.513254 | orchestrator | changed: [testbed-node-1] 2025-05-03 01:02:15.513268 | orchestrator | changed: [testbed-node-2] 2025-05-03 01:02:15.513282 | orchestrator | 2025-05-03 01:02:15.513296 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-05-03 01:02:15.513310 | orchestrator | Saturday 03 May 2025 01:01:56 +0000 (0:00:10.248) 0:02:47.430 ********** 2025-05-03 01:02:15.513324 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:02:15.513338 | orchestrator | changed: [testbed-node-2] 2025-05-03 01:02:15.513352 | orchestrator | changed: [testbed-node-1] 2025-05-03 01:02:15.513367 | orchestrator | 2025-05-03 01:02:15.513381 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-05-03 01:02:15.513395 | orchestrator | Saturday 03 May 2025 01:02:07 +0000 (0:00:10.990) 0:02:58.421 ********** 2025-05-03 01:02:15.513409 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:02:15.513431 | orchestrator | 2025-05-03 01:02:15.513445 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-03 01:02:15.513460 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-03 01:02:15.513475 | orchestrator | testbed-node-1 : ok=19  changed=15  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-03 01:02:15.513489 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-03 01:02:15.513503 | orchestrator | 2025-05-03 01:02:15.513517 | orchestrator | 2025-05-03 01:02:15.513531 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-03 01:02:15.513545 | orchestrator | Saturday 03 May 2025 01:02:12 +0000 (0:00:04.882) 0:03:03.304 ********** 2025-05-03 01:02:15.513559 | orchestrator | =============================================================================== 2025-05-03 01:02:15.513573 | orchestrator | designate : Copying over designate.conf -------------------------------- 27.77s 2025-05-03 01:02:15.513587 | orchestrator | designate : Running Designate bootstrap container ---------------------- 13.72s 2025-05-03 01:02:15.513601 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.42s 2025-05-03 01:02:15.513622 | orchestrator | designate : Restart designate-api container ---------------------------- 11.70s 2025-05-03 01:02:15.513636 | orchestrator | designate : Restart designate-worker container ------------------------- 10.99s 2025-05-03 01:02:15.513650 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.25s 2025-05-03 01:02:15.513664 | orchestrator | designate : Restart designate-producer container ------------------------ 7.26s 2025-05-03 01:02:15.513678 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.68s 2025-05-03 01:02:15.513692 | orchestrator | designate : Restart designate-central container ------------------------- 6.55s 2025-05-03 01:02:15.513706 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.30s 2025-05-03 01:02:15.513720 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 
6.30s 2025-05-03 01:02:15.513734 | orchestrator | designate : Copying over config.json files for services ----------------- 6.09s 2025-05-03 01:02:15.513748 | orchestrator | designate : Check designate containers ---------------------------------- 5.10s 2025-05-03 01:02:15.513770 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.97s 2025-05-03 01:02:15.513842 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 4.88s 2025-05-03 01:02:15.513857 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.59s 2025-05-03 01:02:15.513871 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.80s 2025-05-03 01:02:15.513891 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.59s 2025-05-03 01:02:18.554761 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.53s 2025-05-03 01:02:18.554967 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.42s 2025-05-03 01:02:18.554990 | orchestrator | 2025-05-03 01:02:15 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:02:18.555006 | orchestrator | 2025-05-03 01:02:15 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:02:18.555029 | orchestrator | 2025-05-03 01:02:15 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:02:18.555054 | orchestrator | 2025-05-03 01:02:15 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:02:18.555097 | orchestrator | 2025-05-03 01:02:18 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:02:18.556352 | orchestrator | 2025-05-03 01:02:18 | INFO  | Task d3f6fe28-5013-4e2a-a4b3-01cbff35bfff is in state STARTED 2025-05-03 01:02:18.559010 | orchestrator | 2025-05-03 01:02:18 | INFO  | Task 
8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:02:18.561530 | orchestrator | 2025-05-03 01:02:18 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:02:18.563541 | orchestrator | 2025-05-03 01:02:18 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:02:18.563831 | orchestrator | 2025-05-03 01:02:18 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:02:21.614106 | orchestrator | 2025-05-03 01:02:21 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:02:21.615116 | orchestrator | 2025-05-03 01:02:21 | INFO  | Task d3f6fe28-5013-4e2a-a4b3-01cbff35bfff is in state SUCCESS 2025-05-03 01:02:21.616701 | orchestrator | 2025-05-03 01:02:21.616749 | orchestrator | 2025-05-03 01:02:21.616765 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-03 01:02:21.616811 | orchestrator | 2025-05-03 01:02:21.616825 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-03 01:02:21.616840 | orchestrator | Saturday 03 May 2025 01:01:14 +0000 (0:00:00.264) 0:00:00.264 ********** 2025-05-03 01:02:21.616854 | orchestrator | ok: [testbed-node-0] 2025-05-03 01:02:21.616898 | orchestrator | ok: [testbed-node-1] 2025-05-03 01:02:21.616914 | orchestrator | ok: [testbed-node-2] 2025-05-03 01:02:21.616928 | orchestrator | 2025-05-03 01:02:21.616942 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-03 01:02:21.616957 | orchestrator | Saturday 03 May 2025 01:01:14 +0000 (0:00:00.413) 0:00:00.678 ********** 2025-05-03 01:02:21.616971 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-05-03 01:02:21.616985 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-05-03 01:02:21.616999 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-05-03 
01:02:21.617013 | orchestrator | 2025-05-03 01:02:21.617027 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-05-03 01:02:21.617041 | orchestrator | 2025-05-03 01:02:21.617055 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-03 01:02:21.617069 | orchestrator | Saturday 03 May 2025 01:01:15 +0000 (0:00:00.316) 0:00:00.994 ********** 2025-05-03 01:02:21.617084 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 01:02:21.617099 | orchestrator | 2025-05-03 01:02:21.617114 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-05-03 01:02:21.617128 | orchestrator | Saturday 03 May 2025 01:01:15 +0000 (0:00:00.650) 0:00:01.644 ********** 2025-05-03 01:02:21.617142 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-05-03 01:02:21.617156 | orchestrator | 2025-05-03 01:02:21.617169 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-05-03 01:02:21.617183 | orchestrator | Saturday 03 May 2025 01:01:18 +0000 (0:00:03.252) 0:00:04.897 ********** 2025-05-03 01:02:21.617197 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-05-03 01:02:21.617211 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-05-03 01:02:21.617228 | orchestrator | 2025-05-03 01:02:21.617252 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-05-03 01:02:21.617276 | orchestrator | Saturday 03 May 2025 01:01:25 +0000 (0:00:06.452) 0:00:11.350 ********** 2025-05-03 01:02:21.617299 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-03 01:02:21.617324 | orchestrator | 2025-05-03 01:02:21.617349 | 
orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-05-03 01:02:21.617370 | orchestrator | Saturday 03 May 2025 01:01:28 +0000 (0:00:03.336) 0:00:14.687 ********** 2025-05-03 01:02:21.617387 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-03 01:02:21.617403 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-05-03 01:02:21.617417 | orchestrator | 2025-05-03 01:02:21.617431 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-05-03 01:02:21.617445 | orchestrator | Saturday 03 May 2025 01:01:32 +0000 (0:00:03.766) 0:00:18.453 ********** 2025-05-03 01:02:21.617459 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-03 01:02:21.617472 | orchestrator | 2025-05-03 01:02:21.617487 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-05-03 01:02:21.617502 | orchestrator | Saturday 03 May 2025 01:01:35 +0000 (0:00:03.169) 0:00:21.622 ********** 2025-05-03 01:02:21.617516 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-05-03 01:02:21.617529 | orchestrator | 2025-05-03 01:02:21.617543 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-03 01:02:21.617557 | orchestrator | Saturday 03 May 2025 01:01:39 +0000 (0:00:04.270) 0:00:25.893 ********** 2025-05-03 01:02:21.617571 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:02:21.617585 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:02:21.617599 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:02:21.617612 | orchestrator | 2025-05-03 01:02:21.617626 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-05-03 01:02:21.617665 | orchestrator | Saturday 03 May 2025 01:01:41 +0000 (0:00:01.313) 0:00:27.206 ********** 2025-05-03 01:02:21.617683 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-03 01:02:21.617718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-03 01:02:21.617792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-03 01:02:21.617819 | orchestrator | 2025-05-03 01:02:21.617833 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-05-03 01:02:21.617848 | orchestrator | Saturday 03 May 2025 01:01:42 +0000 (0:00:01.655) 0:00:28.862 ********** 2025-05-03 01:02:21.617862 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:02:21.617876 | orchestrator | 2025-05-03 01:02:21.617890 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-05-03 01:02:21.617903 | orchestrator | Saturday 03 May 2025 01:01:43 +0000 (0:00:00.102) 0:00:28.964 ********** 2025-05-03 01:02:21.617917 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:02:21.617931 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:02:21.617945 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:02:21.617959 | orchestrator | 2025-05-03 01:02:21.617973 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-03 01:02:21.617987 | orchestrator | Saturday 03 May 2025 01:01:43 +0000 (0:00:00.282) 0:00:29.247 ********** 2025-05-03 
01:02:21.618001 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 01:02:21.618072 | orchestrator | 2025-05-03 01:02:21.618090 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-05-03 01:02:21.618104 | orchestrator | Saturday 03 May 2025 01:01:43 +0000 (0:00:00.688) 0:00:29.935 ********** 2025-05-03 01:02:21.618119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-03 01:02:21.618146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 
'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-03 01:02:21.618162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-03 01:02:21.618176 | orchestrator | 2025-05-03 01:02:21.618190 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-05-03 01:02:21.618205 | orchestrator | Saturday 03 May 2025 01:01:45 +0000 (0:00:02.000) 0:00:31.935 ********** 2025-05-03 01:02:21.618232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-03 01:02:21.618255 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:02:21.618269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-03 01:02:21.618284 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:02:21.618308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-03 01:02:21.618334 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:02:21.618359 | orchestrator | 2025-05-03 01:02:21.618385 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-05-03 01:02:21.618409 | orchestrator | Saturday 03 May 2025 01:01:46 +0000 (0:00:00.548) 0:00:32.484 ********** 2025-05-03 01:02:21.618439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-03 01:02:21.618454 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-03 01:02:21.618479 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:02:21.618493 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:02:21.618507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-03 01:02:21.618521 | 
orchestrator | skipping: [testbed-node-1] 2025-05-03 01:02:21.618536 | orchestrator | 2025-05-03 01:02:21.618549 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-05-03 01:02:21.618563 | orchestrator | Saturday 03 May 2025 01:01:47 +0000 (0:00:00.962) 0:00:33.447 ********** 2025-05-03 01:02:21.618587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-03 01:02:21.618602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-03 01:02:21.618627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-03 01:02:21.618649 | orchestrator | 2025-05-03 01:02:21.618664 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-05-03 01:02:21.618678 | orchestrator | Saturday 03 May 2025 01:01:49 +0000 (0:00:01.561) 0:00:35.008 ********** 2025-05-03 01:02:21.618692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-03 01:02:21.618707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-03 01:02:21.618729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-03 01:02:21.618745 | orchestrator | 2025-05-03 01:02:21.618759 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-05-03 01:02:21.618814 | orchestrator | Saturday 03 May 2025 01:01:51 +0000 (0:00:02.006) 0:00:37.014 ********** 2025-05-03 01:02:21.618830 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-03 01:02:21.618844 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-03 01:02:21.618865 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-03 01:02:21.618880 | orchestrator | 2025-05-03 01:02:21.618893 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-05-03 01:02:21.618907 | orchestrator | Saturday 03 May 2025 01:01:52 +0000 (0:00:01.636) 0:00:38.651 ********** 2025-05-03 01:02:21.618921 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:02:21.618935 | orchestrator | changed: [testbed-node-1] 2025-05-03 01:02:21.618949 | orchestrator | changed: [testbed-node-2] 2025-05-03 01:02:21.618963 | orchestrator | 2025-05-03 01:02:21.618977 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-05-03 01:02:21.618991 | orchestrator | Saturday 03 May 2025 01:01:54 +0000 (0:00:01.481) 0:00:40.133 ********** 2025-05-03 01:02:21.619016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-03 01:02:21.619031 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:02:21.619045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-03 01:02:21.619060 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:02:21.619084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-03 01:02:21.619099 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:02:21.619113 | orchestrator | 2025-05-03 01:02:21.619127 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-05-03 01:02:21.619141 | orchestrator | Saturday 03 May 2025 01:01:54 +0000 (0:00:00.773) 0:00:40.907 ********** 2025-05-03 01:02:21.619165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-03 01:02:21.619194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-03 01:02:21.619210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}}}}) 2025-05-03 01:02:21.619225 | orchestrator | 2025-05-03 01:02:21.619239 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-05-03 01:02:21.619252 | orchestrator | Saturday 03 May 2025 01:01:56 +0000 (0:00:01.257) 0:00:42.164 ********** 2025-05-03 01:02:21.619266 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:02:21.619280 | orchestrator | 2025-05-03 01:02:21.619294 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-05-03 01:02:21.619308 | orchestrator | Saturday 03 May 2025 01:01:58 +0000 (0:00:02.522) 0:00:44.687 ********** 2025-05-03 01:02:21.619322 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:02:21.619336 | orchestrator | 2025-05-03 01:02:21.619359 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-05-03 01:02:21.619385 | orchestrator | Saturday 03 May 2025 01:02:00 +0000 (0:00:02.221) 0:00:46.909 ********** 2025-05-03 01:02:21.619423 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:02:21.620663 | orchestrator | 2025-05-03 01:02:21.620802 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-03 01:02:21.620843 | orchestrator | Saturday 03 May 2025 01:02:14 +0000 (0:00:13.121) 0:01:00.030 ********** 2025-05-03 01:02:21.620883 | orchestrator | 2025-05-03 01:02:21.620898 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-03 01:02:21.620912 | orchestrator | Saturday 03 May 2025 01:02:14 +0000 (0:00:00.063) 0:01:00.094 ********** 2025-05-03 01:02:21.620926 | orchestrator | 2025-05-03 01:02:21.620940 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-03 01:02:21.620954 | orchestrator | Saturday 03 May 2025 01:02:14 +0000 (0:00:00.225) 0:01:00.319 ********** 2025-05-03 01:02:21.620968 | orchestrator | 
2025-05-03 01:02:21.620982 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-05-03 01:02:21.620996 | orchestrator | Saturday 03 May 2025 01:02:14 +0000 (0:00:00.077) 0:01:00.397 **********
2025-05-03 01:02:21.621010 | orchestrator | changed: [testbed-node-0]
2025-05-03 01:02:21.621025 | orchestrator | changed: [testbed-node-2]
2025-05-03 01:02:21.621039 | orchestrator | changed: [testbed-node-1]
2025-05-03 01:02:21.621053 | orchestrator |
2025-05-03 01:02:21.621067 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 01:02:21.621083 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-03 01:02:21.621099 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-03 01:02:21.621114 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-03 01:02:21.621128 | orchestrator |
2025-05-03 01:02:21.621142 | orchestrator |
2025-05-03 01:02:21.621156 | orchestrator | TASKS RECAP ********************************************************************
2025-05-03 01:02:21.621170 | orchestrator | Saturday 03 May 2025 01:02:19 +0000 (0:00:05.466) 0:01:05.864 **********
2025-05-03 01:02:21.621184 | orchestrator | ===============================================================================
2025-05-03 01:02:21.621198 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.12s
2025-05-03 01:02:21.621212 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.45s
2025-05-03 01:02:21.621226 | orchestrator | placement : Restart placement-api container ----------------------------- 5.47s
2025-05-03 01:02:21.621240 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.27s
2025-05-03 01:02:21.621254 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.77s
2025-05-03 01:02:21.621268 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.34s
2025-05-03 01:02:21.621282 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.25s
2025-05-03 01:02:21.621298 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.17s
2025-05-03 01:02:21.621315 | orchestrator | placement : Creating placement databases -------------------------------- 2.52s
2025-05-03 01:02:21.621331 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.22s
2025-05-03 01:02:21.621347 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.01s
2025-05-03 01:02:21.621363 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.00s
2025-05-03 01:02:21.621380 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.66s
2025-05-03 01:02:21.621396 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.64s
2025-05-03 01:02:21.621413 | orchestrator | placement : Copying over config.json files for services ----------------- 1.56s
2025-05-03 01:02:21.621427 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.48s
2025-05-03 01:02:21.621441 | orchestrator | placement : include_tasks ----------------------------------------------- 1.31s
2025-05-03 01:02:21.621455 | orchestrator | placement : Check placement containers ---------------------------------- 1.26s
2025-05-03 01:02:21.621469 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.96s
2025-05-03 01:02:21.621491 | orchestrator | placement : Copying over existing policy file --------------------------- 0.77s
2025-05-03 01:02:21.621506 | orchestrator |
2025-05-03 01:02:21 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:02:21.621520 | orchestrator | 2025-05-03 01:02:21 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:02:21.621534 | orchestrator | 2025-05-03 01:02:21 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:02:21.621564 | orchestrator | 2025-05-03 01:02:21 | INFO  | Task 1a058656-1865-4acb-836f-629d071d06c8 is in state STARTED 2025-05-03 01:02:24.679416 | orchestrator | 2025-05-03 01:02:21 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:02:24.679552 | orchestrator | 2025-05-03 01:02:24 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:02:24.681550 | orchestrator | 2025-05-03 01:02:24 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:02:24.683390 | orchestrator | 2025-05-03 01:02:24 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:02:24.685844 | orchestrator | 2025-05-03 01:02:24 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:02:24.687721 | orchestrator | 2025-05-03 01:02:24 | INFO  | Task 1a058656-1865-4acb-836f-629d071d06c8 is in state STARTED 2025-05-03 01:02:27.747537 | orchestrator | 2025-05-03 01:02:24 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:02:27.747716 | orchestrator | 2025-05-03 01:02:27 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:02:27.748694 | orchestrator | 2025-05-03 01:02:27 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:02:27.750150 | orchestrator | 2025-05-03 01:02:27 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:02:27.751493 | orchestrator | 2025-05-03 01:02:27 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:02:27.753079 | orchestrator | 
2025-05-03 01:02:27 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:02:27.754718 | orchestrator | 2025-05-03 01:02:27 | INFO  | Task 1a058656-1865-4acb-836f-629d071d06c8 is in state SUCCESS 2025-05-03 01:02:30.804017 | orchestrator | 2025-05-03 01:02:27 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:02:30.804155 | orchestrator | 2025-05-03 01:02:30 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:02:30.805621 | orchestrator | 2025-05-03 01:02:30 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:02:30.806536 | orchestrator | 2025-05-03 01:02:30 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:02:30.807906 | orchestrator | 2025-05-03 01:02:30 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:02:30.809302 | orchestrator | 2025-05-03 01:02:30 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:02:30.809428 | orchestrator | 2025-05-03 01:02:30 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:02:33.859512 | orchestrator | 2025-05-03 01:02:33 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:02:33.859647 | orchestrator | 2025-05-03 01:02:33 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:02:33.860625 | orchestrator | 2025-05-03 01:02:33 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:02:33.861982 | orchestrator | 2025-05-03 01:02:33 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:02:33.862803 | orchestrator | 2025-05-03 01:02:33 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:02:36.939104 | orchestrator | 2025-05-03 01:02:33 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:02:36.939222 | orchestrator | 2025-05-03 01:02:36 | INFO  | 
Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:02:36.940035 | orchestrator | 2025-05-03 01:02:36 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:02:36.940078 | orchestrator | 2025-05-03 01:02:36 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:02:36.941133 | orchestrator | 2025-05-03 01:02:36 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:02:36.942984 | orchestrator | 2025-05-03 01:02:36 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:02:39.981797 | orchestrator | 2025-05-03 01:02:36 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:02:39.981930 | orchestrator | 2025-05-03 01:02:39 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:02:39.983942 | orchestrator | 2025-05-03 01:02:39 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:02:39.984541 | orchestrator | 2025-05-03 01:02:39 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:02:39.984959 | orchestrator | 2025-05-03 01:02:39 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:02:39.987169 | orchestrator | 2025-05-03 01:02:39 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:02:39.987416 | orchestrator | 2025-05-03 01:02:39 | INFO  | Task 0fd5c0c0-30cb-4e5e-9e47-0d80f8fb05fd is in state STARTED 2025-05-03 01:02:43.023867 | orchestrator | 2025-05-03 01:02:39 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:02:43.023988 | orchestrator | 2025-05-03 01:02:43 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:02:43.024823 | orchestrator | 2025-05-03 01:02:43 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:02:43.024868 | orchestrator | 2025-05-03 01:02:43 | INFO  | Task 
8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:02:43.025687 | orchestrator | 2025-05-03 01:02:43 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:02:43.028333 | orchestrator | 2025-05-03 01:02:43 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:02:43.029328 | orchestrator | 2025-05-03 01:02:43 | INFO  | Task 0fd5c0c0-30cb-4e5e-9e47-0d80f8fb05fd is in state STARTED 2025-05-03 01:02:46.061064 | orchestrator | 2025-05-03 01:02:43 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:02:46.061191 | orchestrator | 2025-05-03 01:02:46 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:02:46.061882 | orchestrator | 2025-05-03 01:02:46 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:02:46.062486 | orchestrator | 2025-05-03 01:02:46 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:02:46.063259 | orchestrator | 2025-05-03 01:02:46 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:02:46.063995 | orchestrator | 2025-05-03 01:02:46 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:02:46.064859 | orchestrator | 2025-05-03 01:02:46 | INFO  | Task 0fd5c0c0-30cb-4e5e-9e47-0d80f8fb05fd is in state STARTED 2025-05-03 01:02:49.120353 | orchestrator | 2025-05-03 01:02:46 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:02:49.120488 | orchestrator | 2025-05-03 01:02:49 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:02:49.123642 | orchestrator | 2025-05-03 01:02:49 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:02:49.124433 | orchestrator | 2025-05-03 01:02:49 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:02:49.124467 | orchestrator | 2025-05-03 01:02:49 | INFO  | Task 
4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:02:49.124484 | orchestrator | 2025-05-03 01:02:49 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:02:49.124507 | orchestrator | 2025-05-03 01:02:49 | INFO  | Task 0fd5c0c0-30cb-4e5e-9e47-0d80f8fb05fd is in state SUCCESS 2025-05-03 01:02:52.165540 | orchestrator | 2025-05-03 01:02:49 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:02:52.165687 | orchestrator | 2025-05-03 01:02:52 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:02:52.166196 | orchestrator | 2025-05-03 01:02:52 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:02:52.166663 | orchestrator | 2025-05-03 01:02:52 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:02:52.167621 | orchestrator | 2025-05-03 01:02:52 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:02:52.168882 | orchestrator | 2025-05-03 01:02:52 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:02:55.221367 | orchestrator | 2025-05-03 01:02:52 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:02:55.221504 | orchestrator | 2025-05-03 01:02:55 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:02:55.222969 | orchestrator | 2025-05-03 01:02:55 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:02:55.224387 | orchestrator | 2025-05-03 01:02:55 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:02:55.225523 | orchestrator | 2025-05-03 01:02:55 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:02:55.226952 | orchestrator | 2025-05-03 01:02:55 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:02:55.227070 | orchestrator | 2025-05-03 01:02:55 | INFO  | Wait 1 
second(s) until the next check 2025-05-03 01:02:58.280237 | orchestrator | 2025-05-03 01:02:58 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:03:01.323132 | orchestrator | 2025-05-03 01:02:58 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:03:01.323238 | orchestrator | 2025-05-03 01:02:58 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:03:01.323258 | orchestrator | 2025-05-03 01:02:58 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:03:01.323447 | orchestrator | 2025-05-03 01:02:58 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:03:01.323469 | orchestrator | 2025-05-03 01:02:58 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:03:01.323500 | orchestrator | 2025-05-03 01:03:01 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:03:01.325147 | orchestrator | 2025-05-03 01:03:01 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:03:01.325208 | orchestrator | 2025-05-03 01:03:01 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:03:01.325393 | orchestrator | 2025-05-03 01:03:01 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:03:01.325419 | orchestrator | 2025-05-03 01:03:01 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:03:01.325440 | orchestrator | 2025-05-03 01:03:01 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:03:04.361398 | orchestrator | 2025-05-03 01:03:04 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:03:04.363214 | orchestrator | 2025-05-03 01:03:04 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:03:04.363246 | orchestrator | 2025-05-03 01:03:04 | INFO  | Task 
8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:03:04.363267 | orchestrator | 2025-05-03 01:03:04 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:03:04.363925 | orchestrator | 2025-05-03 01:03:04 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:03:07.402699 | orchestrator | 2025-05-03 01:03:04 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:03:07.402973 | orchestrator | 2025-05-03 01:03:07 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:03:07.403477 | orchestrator | 2025-05-03 01:03:07 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:03:07.403506 | orchestrator | 2025-05-03 01:03:07 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:03:07.403543 | orchestrator | 2025-05-03 01:03:07 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:03:07.404114 | orchestrator | 2025-05-03 01:03:07 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:03:07.405329 | orchestrator | 2025-05-03 01:03:07 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:03:10.437050 | orchestrator | 2025-05-03 01:03:10 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:03:10.437335 | orchestrator | 2025-05-03 01:03:10 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:03:10.437816 | orchestrator | 2025-05-03 01:03:10 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:03:10.438462 | orchestrator | 2025-05-03 01:03:10 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:03:10.438919 | orchestrator | 2025-05-03 01:03:10 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:03:10.438990 | orchestrator | 2025-05-03 01:03:10 | INFO  | Wait 1 
second(s) until the next check 2025-05-03 01:03:13.469787 | orchestrator | 2025-05-03 01:03:13 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:03:13.470521 | orchestrator | 2025-05-03 01:03:13 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:03:13.473522 | orchestrator | 2025-05-03 01:03:13 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:03:13.474012 | orchestrator | 2025-05-03 01:03:13 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:03:13.477736 | orchestrator | 2025-05-03 01:03:13 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:03:16.510146 | orchestrator | 2025-05-03 01:03:13 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:03:16.510297 | orchestrator | 2025-05-03 01:03:16 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:03:16.510887 | orchestrator | 2025-05-03 01:03:16 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:03:16.511749 | orchestrator | 2025-05-03 01:03:16 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:03:16.512314 | orchestrator | 2025-05-03 01:03:16 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:03:16.513034 | orchestrator | 2025-05-03 01:03:16 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:03:19.541119 | orchestrator | 2025-05-03 01:03:16 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:03:19.541242 | orchestrator | 2025-05-03 01:03:19 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:03:19.541419 | orchestrator | 2025-05-03 01:03:19 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:03:19.543007 | orchestrator | 2025-05-03 01:03:19 | INFO  | Task 
8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:03:19.544346 | orchestrator | 2025-05-03 01:03:19 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:03:19.544610 | orchestrator | 2025-05-03 01:03:19 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:03:22.576314 | orchestrator | 2025-05-03 01:03:19 | INFO  | Wait 1 second(s) until the next check [... identical polling of tasks f24c7e58, b8afa8e8, 8306177d, 4f1f21dd and 48a7cfec, repeated every ~3 seconds from 01:03:22 to 01:03:47 with all tasks in state STARTED, trimmed ...] 2025-05-03 01:03:50.117606 | orchestrator | 2025-05-03 01:03:47 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:03:50.117788 | orchestrator | 2025-05-03 01:03:50 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:03:50.119915 | orchestrator | 2025-05-03 01:03:50 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:03:50.122184 | orchestrator | 2025-05-03 01:03:50 | INFO  | Task 
8306177d-eb95-4353-b86c-2611ba5c93a6 is in state STARTED 2025-05-03 01:03:50.123403 | orchestrator | 2025-05-03 01:03:50 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:03:50.124859 | orchestrator | 2025-05-03 01:03:50 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:03:53.160929 | orchestrator | 2025-05-03 01:03:50 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:03:53.161084 | orchestrator | 2025-05-03 01:03:53 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:03:53.161980 | orchestrator | 2025-05-03 01:03:53 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:03:53.172886 | orchestrator | 2025-05-03 01:03:53 | INFO  | Task 8306177d-eb95-4353-b86c-2611ba5c93a6 is in state SUCCESS 2025-05-03 01:03:53.174544 | orchestrator | 2025-05-03 01:03:53.174591 | orchestrator | 2025-05-03 01:03:53.174607 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-03 01:03:53.174622 | orchestrator | 2025-05-03 01:03:53.174665 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-03 01:03:53.174753 | orchestrator | Saturday 03 May 2025 01:02:23 +0000 (0:00:00.254) 0:00:00.254 ********** 2025-05-03 01:03:53.175581 | orchestrator | ok: [testbed-node-0] 2025-05-03 01:03:53.175604 | orchestrator | ok: [testbed-node-1] 2025-05-03 01:03:53.175618 | orchestrator | ok: [testbed-node-2] 2025-05-03 01:03:53.175633 | orchestrator | 2025-05-03 01:03:53.175647 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-03 01:03:53.175661 | orchestrator | Saturday 03 May 2025 01:02:23 +0000 (0:00:00.425) 0:00:00.680 ********** 2025-05-03 01:03:53.175723 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-05-03 01:03:53.175738 | orchestrator | ok: [testbed-node-1] => 
(item=enable_keystone_True) 2025-05-03 01:03:53.176820 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-03 01:03:53.176848 | orchestrator | 2025-05-03 01:03:53.176864 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-05-03 01:03:53.176878 | orchestrator | 2025-05-03 01:03:53.176905 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-05-03 01:03:53.176941 | orchestrator | Saturday 03 May 2025 01:02:24 +0000 (0:00:00.523) 0:00:01.203 ********** 2025-05-03 01:03:53.176956 | orchestrator | ok: [testbed-node-1] 2025-05-03 01:03:53.176970 | orchestrator | ok: [testbed-node-0] 2025-05-03 01:03:53.178573 | orchestrator | ok: [testbed-node-2] 2025-05-03 01:03:53.178609 | orchestrator | 2025-05-03 01:03:53.178626 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-03 01:03:53.178642 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-03 01:03:53.178657 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-03 01:03:53.178672 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-03 01:03:53.178714 | orchestrator | 2025-05-03 01:03:53.178729 | orchestrator | 2025-05-03 01:03:53.178743 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-03 01:03:53.178757 | orchestrator | Saturday 03 May 2025 01:02:25 +0000 (0:00:00.853) 0:00:02.057 ********** 2025-05-03 01:03:53.178771 | orchestrator | =============================================================================== 2025-05-03 01:03:53.178785 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.85s 2025-05-03 01:03:53.178799 | orchestrator | Group hosts based on enabled services 
----------------------------------- 0.52s 2025-05-03 01:03:53.178813 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.43s 2025-05-03 01:03:53.178826 | orchestrator | 2025-05-03 01:03:53.178841 | orchestrator | None 2025-05-03 01:03:53.178854 | orchestrator | 2025-05-03 01:03:53.178869 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-03 01:03:53.178882 | orchestrator | 2025-05-03 01:03:53.178896 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-03 01:03:53.178910 | orchestrator | Saturday 03 May 2025 00:59:09 +0000 (0:00:00.278) 0:00:00.278 ********** 2025-05-03 01:03:53.178924 | orchestrator | ok: [testbed-node-0] 2025-05-03 01:03:53.178938 | orchestrator | ok: [testbed-node-1] 2025-05-03 01:03:53.178952 | orchestrator | ok: [testbed-node-2] 2025-05-03 01:03:53.178966 | orchestrator | ok: [testbed-node-3] 2025-05-03 01:03:53.178980 | orchestrator | ok: [testbed-node-4] 2025-05-03 01:03:53.178994 | orchestrator | ok: [testbed-node-5] 2025-05-03 01:03:53.179007 | orchestrator | 2025-05-03 01:03:53.179022 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-03 01:03:53.179036 | orchestrator | Saturday 03 May 2025 00:59:09 +0000 (0:00:00.840) 0:00:01.119 ********** 2025-05-03 01:03:53.179050 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-05-03 01:03:53.179064 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-05-03 01:03:53.179078 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-05-03 01:03:53.179092 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-05-03 01:03:53.179105 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-05-03 01:03:53.179119 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-05-03 01:03:53.179133 | 
orchestrator | 2025-05-03 01:03:53.179147 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-05-03 01:03:53.179162 | orchestrator | 2025-05-03 01:03:53.179175 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-03 01:03:53.179190 | orchestrator | Saturday 03 May 2025 00:59:10 +0000 (0:00:00.782) 0:00:01.901 ********** 2025-05-03 01:03:53.179204 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 01:03:53.179219 | orchestrator | 2025-05-03 01:03:53.179233 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-05-03 01:03:53.179247 | orchestrator | Saturday 03 May 2025 00:59:11 +0000 (0:00:01.113) 0:00:03.014 ********** 2025-05-03 01:03:53.179286 | orchestrator | ok: [testbed-node-1] 2025-05-03 01:03:53.179302 | orchestrator | ok: [testbed-node-2] 2025-05-03 01:03:53.179316 | orchestrator | ok: [testbed-node-0] 2025-05-03 01:03:53.179330 | orchestrator | ok: [testbed-node-3] 2025-05-03 01:03:53.179344 | orchestrator | ok: [testbed-node-4] 2025-05-03 01:03:53.179358 | orchestrator | ok: [testbed-node-5] 2025-05-03 01:03:53.179371 | orchestrator | 2025-05-03 01:03:53.179386 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-05-03 01:03:53.179399 | orchestrator | Saturday 03 May 2025 00:59:12 +0000 (0:00:01.180) 0:00:04.195 ********** 2025-05-03 01:03:53.179413 | orchestrator | ok: [testbed-node-0] 2025-05-03 01:03:53.179427 | orchestrator | ok: [testbed-node-1] 2025-05-03 01:03:53.179441 | orchestrator | ok: [testbed-node-2] 2025-05-03 01:03:53.179454 | orchestrator | ok: [testbed-node-3] 2025-05-03 01:03:53.179479 | orchestrator | ok: [testbed-node-4] 2025-05-03 01:03:53.179510 | orchestrator | ok: [testbed-node-5] 2025-05-03 01:03:53.179525 | 
orchestrator | 2025-05-03 01:03:53.179540 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-05-03 01:03:53.179554 | orchestrator | Saturday 03 May 2025 00:59:14 +0000 (0:00:01.086) 0:00:05.282 ********** 2025-05-03 01:03:53.179569 | orchestrator | ok: [testbed-node-0] => { 2025-05-03 01:03:53.179583 | orchestrator |  "changed": false, 2025-05-03 01:03:53.179597 | orchestrator |  "msg": "All assertions passed" 2025-05-03 01:03:53.179611 | orchestrator | } 2025-05-03 01:03:53.179625 | orchestrator | ok: [testbed-node-1] => { 2025-05-03 01:03:53.179639 | orchestrator |  "changed": false, 2025-05-03 01:03:53.179653 | orchestrator |  "msg": "All assertions passed" 2025-05-03 01:03:53.179667 | orchestrator | } 2025-05-03 01:03:53.179710 | orchestrator | ok: [testbed-node-2] => { 2025-05-03 01:03:53.179737 | orchestrator |  "changed": false, 2025-05-03 01:03:53.179763 | orchestrator |  "msg": "All assertions passed" 2025-05-03 01:03:53.179778 | orchestrator | } 2025-05-03 01:03:53.179792 | orchestrator | ok: [testbed-node-3] => { 2025-05-03 01:03:53.179806 | orchestrator |  "changed": false, 2025-05-03 01:03:53.179820 | orchestrator |  "msg": "All assertions passed" 2025-05-03 01:03:53.179834 | orchestrator | } 2025-05-03 01:03:53.179848 | orchestrator | ok: [testbed-node-4] => { 2025-05-03 01:03:53.179862 | orchestrator |  "changed": false, 2025-05-03 01:03:53.179876 | orchestrator |  "msg": "All assertions passed" 2025-05-03 01:03:53.179890 | orchestrator | } 2025-05-03 01:03:53.179903 | orchestrator | ok: [testbed-node-5] => { 2025-05-03 01:03:53.179917 | orchestrator |  "changed": false, 2025-05-03 01:03:53.179931 | orchestrator |  "msg": "All assertions passed" 2025-05-03 01:03:53.179945 | orchestrator | } 2025-05-03 01:03:53.179959 | orchestrator | 2025-05-03 01:03:53.179973 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-05-03 01:03:53.179987 | orchestrator | 
Saturday 03 May 2025 00:59:14 +0000 (0:00:00.610) 0:00:05.892 ********** 2025-05-03 01:03:53.180001 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:03:53.180015 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:03:53.180029 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:03:53.180043 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:03:53.180057 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:03:53.180071 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:03:53.180085 | orchestrator | 2025-05-03 01:03:53.180099 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-05-03 01:03:53.180113 | orchestrator | Saturday 03 May 2025 00:59:15 +0000 (0:00:00.718) 0:00:06.610 ********** 2025-05-03 01:03:53.180127 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-05-03 01:03:53.180148 | orchestrator | 2025-05-03 01:03:53.180163 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-05-03 01:03:53.180177 | orchestrator | Saturday 03 May 2025 00:59:18 +0000 (0:00:03.081) 0:00:09.692 ********** 2025-05-03 01:03:53.180191 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-05-03 01:03:53.180214 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-05-03 01:03:53.180229 | orchestrator | 2025-05-03 01:03:53.180243 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-05-03 01:03:53.180257 | orchestrator | Saturday 03 May 2025 00:59:24 +0000 (0:00:06.220) 0:00:15.913 ********** 2025-05-03 01:03:53.180271 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-03 01:03:53.180285 | orchestrator | 2025-05-03 01:03:53.180299 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-05-03 
01:03:53.180313 | orchestrator | Saturday 03 May 2025 00:59:27 +0000 (0:00:03.322) 0:00:19.235 ********** 2025-05-03 01:03:53.180327 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-03 01:03:53.180341 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-05-03 01:03:53.180355 | orchestrator | 2025-05-03 01:03:53.180369 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-05-03 01:03:53.180383 | orchestrator | Saturday 03 May 2025 00:59:31 +0000 (0:00:03.716) 0:00:22.952 ********** 2025-05-03 01:03:53.180396 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-03 01:03:53.180410 | orchestrator | 2025-05-03 01:03:53.180425 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-05-03 01:03:53.180439 | orchestrator | Saturday 03 May 2025 00:59:35 +0000 (0:00:03.371) 0:00:26.323 ********** 2025-05-03 01:03:53.180453 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-05-03 01:03:53.180467 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-05-03 01:03:53.180481 | orchestrator | 2025-05-03 01:03:53.180495 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-03 01:03:53.180509 | orchestrator | Saturday 03 May 2025 00:59:43 +0000 (0:00:08.145) 0:00:34.468 ********** 2025-05-03 01:03:53.180523 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:03:53.180537 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:03:53.180579 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:03:53.180594 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:03:53.180608 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:03:53.180622 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:03:53.180635 | orchestrator | 2025-05-03 01:03:53.180649 | orchestrator | TASK [Load and persist 
kernel modules] ***************************************** 2025-05-03 01:03:53.180663 | orchestrator | Saturday 03 May 2025 00:59:43 +0000 (0:00:00.688) 0:00:35.157 ********** 2025-05-03 01:03:53.180714 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:03:53.180731 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:03:53.180745 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:03:53.180759 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:03:53.180773 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:03:53.180787 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:03:53.180801 | orchestrator | 2025-05-03 01:03:53.180815 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-05-03 01:03:53.180853 | orchestrator | Saturday 03 May 2025 00:59:46 +0000 (0:00:02.882) 0:00:38.039 ********** 2025-05-03 01:03:53.180868 | orchestrator | ok: [testbed-node-0] 2025-05-03 01:03:53.180883 | orchestrator | ok: [testbed-node-1] 2025-05-03 01:03:53.180897 | orchestrator | ok: [testbed-node-2] 2025-05-03 01:03:53.180911 | orchestrator | ok: [testbed-node-3] 2025-05-03 01:03:53.180924 | orchestrator | ok: [testbed-node-4] 2025-05-03 01:03:53.180946 | orchestrator | ok: [testbed-node-5] 2025-05-03 01:03:53.180960 | orchestrator | 2025-05-03 01:03:53.180975 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-05-03 01:03:53.180989 | orchestrator | Saturday 03 May 2025 00:59:49 +0000 (0:00:02.247) 0:00:40.286 ********** 2025-05-03 01:03:53.181003 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:03:53.181017 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:03:53.181031 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:03:53.181045 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:03:53.181066 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:03:53.181080 | orchestrator | skipping: [testbed-node-5] 2025-05-03 
01:03:53.181094 | orchestrator | 2025-05-03 01:03:53.181108 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-05-03 01:03:53.181122 | orchestrator | Saturday 03 May 2025 00:59:51 +0000 (0:00:02.617) 0:00:42.904 ********** 2025-05-03 01:03:53.181140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-03 01:03:53.181159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.181175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.181190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.181214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.181236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.181277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.181294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.181310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.181326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.181360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.181384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.181399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.181414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.181429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.181456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.181480 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.181502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.181518 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.181532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.181556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-03 01:03:53.181578 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.181599 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.181614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.181629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.181649 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.181672 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.181839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.181889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.181906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 
'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.181921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.181936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.181951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.181981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.182011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.182409 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.182426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.182440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-05-03 01:03:53.182454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.182467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.182517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.182541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.182558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.182571 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-03 01:03:53.182584 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.182614 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.182635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.182649 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.182663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.182708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.182722 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.182741 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.182761 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.182775 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.182789 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.182802 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.182829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-03 01:03:53.182850 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.182864 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.182878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.182891 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.182912 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.182932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.182951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.182965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.182986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.183003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.183018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 
01:03:53.183039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.183060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.183077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.183100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.183117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.183132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.183165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.183181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.183203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.183226 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.183243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.183264 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-03 01:03:53.183278 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.183300 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.183315 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.183330 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.183358 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.183372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.183385 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.183404 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.183417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.183431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.183444 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 
'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.183471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.183484 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.183503 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.183517 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-03 01:03:53.183530 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.183555 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.183570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.183583 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.183602 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.183624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.183637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.183655 | orchestrator | 2025-05-03 01:03:53.183669 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-05-03 01:03:53.183696 | orchestrator | Saturday 03 May 2025 00:59:55 +0000 (0:00:03.768) 0:00:46.672 ********** 2025-05-03 01:03:53.183710 | orchestrator | [WARNING]: Skipped 2025-05-03 01:03:53.183723 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-05-03 01:03:53.183735 | orchestrator | due to this access issue: 2025-05-03 01:03:53.183753 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-05-03 01:03:53.183765 | orchestrator | a directory 2025-05-03 01:03:53.183778 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-03 01:03:53.183790 | orchestrator | 2025-05-03 01:03:53.183803 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-03 01:03:53.183816 | orchestrator | Saturday 03 May 2025 00:59:56 +0000 (0:00:00.835) 0:00:47.508 ********** 2025-05-03 01:03:53.183828 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for 
testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 01:03:53.183842 | orchestrator | 2025-05-03 01:03:53.183854 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-05-03 01:03:53.183866 | orchestrator | Saturday 03 May 2025 00:59:57 +0000 (0:00:01.659) 0:00:49.168 ********** 2025-05-03 01:03:53.183879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-03 01:03:53.183898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-03 01:03:53.183912 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-03 01:03:53.183930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-03 
01:03:53.183952 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-03 01:03:53.183966 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-03 01:03:53.183978 | orchestrator | 2025-05-03 01:03:53.183991 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-05-03 01:03:53.184004 | orchestrator | Saturday 03 May 2025 01:00:03 +0000 (0:00:05.382) 0:00:54.551 ********** 2025-05-03 01:03:53.184023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.184036 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:03:53.184050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.184071 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:03:53.184091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.184105 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:03:53.184118 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.184131 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:03:53.184144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.184157 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:03:53.184181 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.184204 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:03:53.184217 | orchestrator | 2025-05-03 01:03:53.184230 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-05-03 01:03:53.184247 | orchestrator | Saturday 03 May 2025 01:00:07 +0000 (0:00:04.296) 0:00:58.847 ********** 2025-05-03 01:03:53.184267 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.184281 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:03:53.184294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.184307 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:03:53.184319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.184332 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:03:53.184350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.184370 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:03:53.184391 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.184412 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:03:53.184425 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.184438 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:03:53.184451 | orchestrator | 2025-05-03 01:03:53.184463 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-05-03 01:03:53.184476 | orchestrator | Saturday 03 May 2025 01:00:12 +0000 (0:00:05.336) 0:01:04.184 ********** 2025-05-03 01:03:53.184488 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:03:53.184501 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:03:53.184513 | 
orchestrator | skipping: [testbed-node-2] 2025-05-03 01:03:53.184525 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:03:53.184538 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:03:53.184550 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:03:53.184563 | orchestrator | 2025-05-03 01:03:53.184575 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-05-03 01:03:53.184588 | orchestrator | Saturday 03 May 2025 01:00:17 +0000 (0:00:04.370) 0:01:08.554 ********** 2025-05-03 01:03:53.184600 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:03:53.184613 | orchestrator | 2025-05-03 01:03:53.184625 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-05-03 01:03:53.184637 | orchestrator | Saturday 03 May 2025 01:00:17 +0000 (0:00:00.100) 0:01:08.655 ********** 2025-05-03 01:03:53.184650 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:03:53.184662 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:03:53.184702 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:03:53.184716 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:03:53.184729 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:03:53.184741 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:03:53.184753 | orchestrator | 2025-05-03 01:03:53.184766 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-05-03 01:03:53.184779 | orchestrator | Saturday 03 May 2025 01:00:18 +0000 (0:00:00.681) 0:01:09.336 ********** 2025-05-03 01:03:53.184796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.184823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.185352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.185387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.185402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.185491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.185518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.185532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.185556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.185570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  
2025-05-03 01:03:53.185591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.185610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.185625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.185638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.185651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.185665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.185896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.186079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.186108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.186125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.186141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.186157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.186172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.186403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.186446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.186462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 
'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.186477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.186491 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:03:53.186508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.186523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.186555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.186579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.186595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.186621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.186636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 
'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.186650 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:03:53.186665 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.186733 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.186750 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.186765 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.186781 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 
'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.186795 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.186827 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.186850 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': 
{'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.186865 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.186880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.186895 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.186918 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.186939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.186966 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.186984 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.187010 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.187026 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.187043 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:03:53.187060 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.187090 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.187108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.187125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.187142 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.187168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.187197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 
'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.187215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.187238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.187254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.187280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.187296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.187318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.187341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.187357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.187371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.187386 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.187408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.187432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.187455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.187471 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 
'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.187494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.187509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.187531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 
'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.187546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.187566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.187582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.187597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.187620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.187641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.187656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.187670 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:03:53.187711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.187727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.187742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.187756 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:03:53.187780 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.187805 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.187820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.187842 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.187857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.187880 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.187905 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.187920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.187935 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.187956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.187979 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.187995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.188016 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.188031 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.188046 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.188078 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.188094 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.188109 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:03:53.188134 | orchestrator | 2025-05-03 01:03:53.188149 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-05-03 01:03:53.188164 | orchestrator | Saturday 03 May 2025 01:00:21 +0000 (0:00:03.812) 0:01:13.149 ********** 2025-05-03 01:03:53.188178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-03 01:03:53.188194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.188209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.188230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.188246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.188275 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.188291 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.188306 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.188326 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.188342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.188364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.188388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.188403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.188418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.188433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.188454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.188469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.188498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.188513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.188528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.188543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 
01:03:53.188558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.188579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.188609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.188626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.188641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.188655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.188690 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.188721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-03 01:03:53.188737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.188752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}}) 
 2025-05-03 01:03:53.188767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.188787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.188818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.188834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.188849 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': 
'30'}}})  2025-05-03 01:03:53.188863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.188878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.188900 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.188921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 
'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.188937 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.188960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.188975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.188990 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.189005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.189036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.189052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.189066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.189081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.189105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.189120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.189147 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.189163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.189186 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.189202 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.189217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.189245 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.189260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.189275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.189289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.189313 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.189328 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-03 01:03:53.189355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-03 01:03:53.189370 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.189393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.189409 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.189424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': 
'30'}}})  2025-05-03 01:03:53.189438 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.189465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.189480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.189495 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.189520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.189535 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.189561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.189582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.189597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.189621 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.189636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.189651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.189736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.189756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.189778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.189793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.189820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.189836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.189862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.189884 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-03 01:03:53.189899 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.189921 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.189937 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.189952 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.189973 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.189997 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.190054 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.190073 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-03 01:03:53.190088 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.190103 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.190125 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 
01:03:53.190152 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.190175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.190190 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 
'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.190205 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.190226 | orchestrator | 2025-05-03 01:03:53.190240 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-05-03 01:03:53.190255 | orchestrator | Saturday 03 May 2025 01:00:27 +0000 (0:00:05.116) 0:01:18.266 ********** 2025-05-03 01:03:53.190269 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.190292 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.190311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': 
'30'}}})  2025-05-03 01:03:53.190325 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.190338 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.190357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.190370 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.190383 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.190409 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.190423 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.190437 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.190455 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.190476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.190495 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.190508 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.190521 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.190534 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.190553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.190566 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.190592 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.190606 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.190619 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  
2025-05-03 01:03:53.190638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.190651 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.190664 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 
01:03:53.190691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.190722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.190737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-03 01:03:53.190756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.190769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.190790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.190809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.194168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.194241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.194258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.194273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.194288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.194335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.194352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.194367 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.194389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.194404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.194419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.194451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.194467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-03 01:03:53.194489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.194504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.194519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.194534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.194555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.194587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.194602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.194618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.194632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.194647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.194697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.194714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.194736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.194751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.194766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.194781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.194812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-03 01:03:53.194838 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-03 01:03:53.194853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.194868 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.194882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.194905 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.194927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.194953 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.194968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.194983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.194998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.195028 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.195050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.195065 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.195080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.195094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.195109 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-03 01:03:53.195132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.195161 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.195176 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.195191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.195205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.195220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.195234 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.195265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.195288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 
'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.195304 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-03 01:03:53.195319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.195333 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.195357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.195385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.195400 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.195415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.195438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.195454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.195469 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.195496 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.195511 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.195534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.195551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.195566 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.195588 | orchestrator | 2025-05-03 01:03:53.195603 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-05-03 01:03:53.195618 | 
orchestrator | Saturday 03 May 2025 01:00:33 +0000 (0:00:06.815) 0:01:25.081 ********** 2025-05-03 01:03:53.195641 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.195657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.195672 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 
'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.195711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.195726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.195747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.195780 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.195796 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.195811 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.195838 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.195854 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.195876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.195891 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.195913 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 
'timeout': '30'}}})  2025-05-03 01:03:53.195928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.195953 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.195968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.195989 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:03:53.196004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-03 01:03:53.196026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.196041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.196055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.196070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 
'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.196099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.196120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.196141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 
'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.196157 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.196171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.196195 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.196217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.196237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.196252 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.196267 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.196281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.196311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.196326 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.196341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.196361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.196376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.196391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.196405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.196437 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.196452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.196473 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.196488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.196511 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.196533 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.196548 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.196562 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.196584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.196612 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.196627 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.196648 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:03:53.196662 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.196709 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.196732 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.196747 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.196761 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.196792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.196808 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.196823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.196837 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': 
'30'}}})  2025-05-03 01:03:53.196867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.196883 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.196905 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.196920 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.196935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.196950 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.196980 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.196996 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.197017 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:03:53.197032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-03 01:03:53.197047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.197061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.197083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.197098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 
'timeout': '30'}}})  2025-05-03 01:03:53.197119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.197142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.197157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.197172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.197193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.197208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.197239 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.197254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.197269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.197284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.197304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.197319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.197349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-03 01:03:53.197365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 
01:03:53.197379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.197394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.197415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.197437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.197464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.197479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.197493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.197508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.197536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.197772 | orchestrator | 2025-05-03 01:03:53 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:03:53.197807 | orchestrator | 2025-05-03 01:03:53 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:03:53.197822 | orchestrator | 2025-05-03 01:03:53 | INFO  | Task 0de5d6c3-1f65-42ed-9823-3059d906c153 is in state STARTED 2025-05-03 01:03:53.197853 | orchestrator | 2025-05-03 01:03:53 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:03:53.197871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.197886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}}})  2025-05-03 01:03:53.197901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.197916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.197932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.197959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.197974 | orchestrator | 2025-05-03 01:03:53.197989 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-05-03 01:03:53.198003 | orchestrator | Saturday 03 May 2025 01:00:37 +0000 (0:00:03.831) 0:01:28.912 ********** 2025-05-03 01:03:53.198048 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:03:53.198065 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:03:53.198079 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:03:53.198093 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:03:53.198107 | orchestrator | changed: [testbed-node-1] 2025-05-03 01:03:53.198121 | orchestrator | changed: [testbed-node-2] 2025-05-03 01:03:53.198135 | orchestrator | 2025-05-03 01:03:53.198147 | orchestrator | TASK [neutron : Copying over 
ml2_conf.ini] ************************************* 2025-05-03 01:03:53.198160 | orchestrator | Saturday 03 May 2025 01:00:43 +0000 (0:00:05.796) 0:01:34.709 ********** 2025-05-03 01:03:53.198184 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.198199 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.198212 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.198231 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.198251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.198265 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.198278 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.198299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-05-03 01:03:53.198312 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.198331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.198350 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.198364 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.198377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.198398 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.198412 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.198432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.198451 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.198464 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:03:53.198478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.198498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.198512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.198532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  
2025-05-03 01:03:53.198550 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.198563 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.198576 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 
01:03:53.198597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.198611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.198630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.198644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.198663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.198691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.198716 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.198731 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.198750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.198763 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.198776 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:03:53.198795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.198816 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.198830 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.198849 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.198863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-03 01:03:53.198881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.198895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.198916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.198937 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.198950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.198963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.198989 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.199003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.199016 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 
01:03:53.199035 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.199049 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.199061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.199079 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.199093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.199114 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 
5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.199133 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.199151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.199164 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.199182 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.199195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.199208 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:03:53.199228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.199247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.199261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.199274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.199286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.199304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.199326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.199345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.199358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.199371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-03 01:03:53.199400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.199414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': 
{'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.199433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.199446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.199459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.199472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.199490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 
01:03:53.199512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.199531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.199544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.199557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.199570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-03 01:03:53.199588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.199721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.199750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.199763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.199776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.199861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.199880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.199901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.199915 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.199928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.199941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.199954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': 
{'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.200033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.200059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.200073 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.200086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.200100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.200112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.200193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.200219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.200233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.200246 | orchestrator | 2025-05-03 01:03:53.200259 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-05-03 01:03:53.200272 | orchestrator | Saturday 03 May 2025 01:00:47 +0000 (0:00:03.658) 0:01:38.367 ********** 2025-05-03 01:03:53.200285 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:03:53.200298 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:03:53.200310 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:03:53.200322 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:03:53.200335 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:03:53.200347 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:03:53.200359 | orchestrator | 2025-05-03 01:03:53.200372 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-05-03 01:03:53.200385 | orchestrator | Saturday 03 May 2025 01:00:49 +0000 (0:00:02.423) 0:01:40.790 ********** 2025-05-03 01:03:53.200397 | orchestrator | skipping: 
[testbed-node-0] 2025-05-03 01:03:53.200414 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:03:53.200427 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:03:53.200439 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:03:53.200451 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:03:53.200464 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:03:53.200476 | orchestrator | 2025-05-03 01:03:53.200488 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-05-03 01:03:53.200501 | orchestrator | Saturday 03 May 2025 01:00:51 +0000 (0:00:02.033) 0:01:42.824 ********** 2025-05-03 01:03:53.200513 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:03:53.200525 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:03:53.200538 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:03:53.200550 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:03:53.200562 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:03:53.200575 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:03:53.200587 | orchestrator | 2025-05-03 01:03:53.200599 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-05-03 01:03:53.200612 | orchestrator | Saturday 03 May 2025 01:00:54 +0000 (0:00:03.021) 0:01:45.845 ********** 2025-05-03 01:03:53.200624 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:03:53.200636 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:03:53.200649 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:03:53.200667 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:03:53.200696 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:03:53.200709 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:03:53.200721 | orchestrator | 2025-05-03 01:03:53.200734 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-05-03 01:03:53.200747 | 
orchestrator | Saturday 03 May 2025 01:00:56 +0000 (0:00:02.187) 0:01:48.033 ********** 2025-05-03 01:03:53.200759 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:03:53.200772 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:03:53.200784 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:03:53.200796 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:03:53.200809 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:03:53.200824 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:03:53.200837 | orchestrator | 2025-05-03 01:03:53.200851 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-05-03 01:03:53.200866 | orchestrator | Saturday 03 May 2025 01:00:58 +0000 (0:00:02.135) 0:01:50.169 ********** 2025-05-03 01:03:53.200881 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:03:53.200895 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:03:53.200910 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:03:53.200924 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:03:53.201002 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:03:53.201021 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:03:53.201034 | orchestrator | 2025-05-03 01:03:53.201049 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-05-03 01:03:53.201068 | orchestrator | Saturday 03 May 2025 01:01:01 +0000 (0:00:02.693) 0:01:52.863 ********** 2025-05-03 01:03:53.201083 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-03 01:03:53.201097 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:03:53.201112 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-03 01:03:53.201127 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:03:53.201142 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-03 01:03:53.201157 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:03:53.201172 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-03 01:03:53.201184 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:03:53.201197 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-03 01:03:53.201209 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:03:53.201222 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-03 01:03:53.201234 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:03:53.201247 | orchestrator | 2025-05-03 01:03:53.201259 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-05-03 01:03:53.201272 | orchestrator | Saturday 03 May 2025 01:01:03 +0000 (0:00:01.987) 0:01:54.851 ********** 2025-05-03 01:03:53.201285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.201319 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.201333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.201407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.201425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.201439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.201452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 
'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.201473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.201496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.201570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.201589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.201603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.201616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.201636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.201659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 
'yes'}}}})  2025-05-03 01:03:53.201673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.201805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.201824 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:03:53.201837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.201869 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.201883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.201896 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.201970 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.201988 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.202002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.202063 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.202090 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.202104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.202180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.202199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.202212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.202230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.202249 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.202261 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.202318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.202332 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:03:53.202344 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.202360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.202380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.202391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.202448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.202463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.202474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.202491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.202501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.202523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.202535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.202593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.202608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.202624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.202636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 
'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.202655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.202667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.202690 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:03:53.202750 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.202772 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.202791 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.202803 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.202814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.202872 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.202886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.202903 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.202914 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.202933 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.202944 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.202955 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.203011 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.203032 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.203051 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.203063 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.203074 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.203084 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:03:53.203140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.203161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.203172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.203191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.203202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.203213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.203269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.203290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.203301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.203320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.203331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.203342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.203353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.203414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.203429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.203449 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.203460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.203471 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:03:53.203482 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.203547 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.203563 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.203582 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.203593 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.203604 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.203618 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.203693 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.203709 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': 
'30'}}})  2025-05-03 01:03:53.203728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.203739 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.203750 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.203761 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.203777 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.203843 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.203860 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.203871 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.203881 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:03:53.203892 | orchestrator | 2025-05-03 01:03:53.203902 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] 
********************************* 2025-05-03 01:03:53.203913 | orchestrator | Saturday 03 May 2025 01:01:05 +0000 (0:00:01.998) 0:01:56.849 ********** 2025-05-03 01:03:53.203923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.203994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.204011 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.204022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.204032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.204043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.204061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.204072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-05-03 01:03:53.204138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.204154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.204165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.204176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.204199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.204255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.204270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.204281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.204292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.204303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.204319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.204330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.204385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.204400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  
2025-05-03 01:03:53.204420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.204431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.204449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.204494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.204508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.204519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 
'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.204537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.204548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.204564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.204646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.204660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.204717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.204731 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:03:53.204742 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:03:53.204753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.204769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.204780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.204853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.204869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.204880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.204897 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.204908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.204918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.204981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.204994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.205004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.205013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.205027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.205036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.205091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.205104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.205113 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:03:53.205122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.205136 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.205146 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.205192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.205204 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.205220 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.205235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.205244 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.205253 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.205262 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.205318 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.205331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.205340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.205354 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.205364 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.205382 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.205432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.205444 | 
orchestrator | skipping: [testbed-node-4] 2025-05-03 01:03:53.205454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.205468 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.205477 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.205486 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.205534 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.205553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.205567 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.205577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.205586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.205595 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.205649 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.205663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.205687 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.205697 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 
01:03:53.205707 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.205723 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.205750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.205760 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:03:53.205769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.205783 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.205792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.205801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.205810 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.205841 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.205856 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.205865 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.205874 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.205883 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.205892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.205922 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.205937 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.205946 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.205956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.205965 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-03 01:03:53.205974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-03 01:03:53.205983 | orchestrator | skipping: [testbed-node-5]
2025-05-03 01:03:53.205992 | orchestrator |
2025-05-03 01:03:53.206000 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2025-05-03 01:03:53.206009 | orchestrator | Saturday 03 May 2025 01:01:07 +0000 (0:00:02.229) 0:01:59.078 **********
2025-05-03 01:03:53.206034 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:03:53.206049 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:03:53.206058 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:03:53.206067 | orchestrator | skipping: [testbed-node-4]
2025-05-03 01:03:53.206076 | orchestrator | skipping: [testbed-node-3]
2025-05-03 01:03:53.206091 | orchestrator | skipping: [testbed-node-5]
2025-05-03 01:03:53.206101 | orchestrator |
2025-05-03 01:03:53.206127 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2025-05-03 01:03:53.206137 | orchestrator | Saturday 03 May 2025 01:01:10 +0000 (0:00:03.006) 0:02:02.085 **********
2025-05-03 01:03:53.206146 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:03:53.206155 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:03:53.206164 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:03:53.206172 | orchestrator | changed: [testbed-node-3]
2025-05-03 01:03:53.206181 | orchestrator | changed: [testbed-node-4]
2025-05-03 01:03:53.206190 | orchestrator | changed: [testbed-node-5]
2025-05-03 01:03:53.206198 | orchestrator |
2025-05-03 01:03:53.206207 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************
2025-05-03 01:03:53.206215 | orchestrator | Saturday 03 May 2025 01:01:16 +0000 (0:00:05.611) 0:02:07.696 **********
2025-05-03 01:03:53.206224 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:03:53.206233 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:03:53.206241 | orchestrator | skipping: [testbed-node-3]
2025-05-03 01:03:53.206250 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:03:53.206258 | orchestrator | skipping: [testbed-node-5]
2025-05-03 01:03:53.206267 | orchestrator | skipping: [testbed-node-4]
2025-05-03 01:03:53.206275 | orchestrator |
2025-05-03 01:03:53.206284 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-05-03 01:03:53.206294 | orchestrator | Saturday 03 May 2025 01:01:18 +0000 (0:00:01.821) 0:02:09.517 **********
2025-05-03 01:03:53.206304 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:03:53.206314 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:03:53.206324 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:03:53.206334 | orchestrator | skipping: [testbed-node-3]
2025-05-03 01:03:53.206344 | orchestrator | skipping: [testbed-node-4]
2025-05-03 01:03:53.206354 | orchestrator | skipping: [testbed-node-5]
2025-05-03 01:03:53.206364 | orchestrator |
2025-05-03 01:03:53.206373 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-05-03 01:03:53.206384 | orchestrator | Saturday 03 May 2025 01:01:20 +0000 (0:00:01.974) 0:02:11.492 **********
2025-05-03 01:03:53.206393 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:03:53.206403 | orchestrator | skipping: [testbed-node-5]
2025-05-03 01:03:53.206414 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:03:53.206423 | orchestrator | skipping: [testbed-node-3]
2025-05-03 01:03:53.206434 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:03:53.206444 | orchestrator | skipping: [testbed-node-4]
2025-05-03 01:03:53.206453 | orchestrator |
2025-05-03 01:03:53.206463 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-05-03 01:03:53.206473 | orchestrator | Saturday 03 May 2025 01:01:23 +0000 (0:00:03.210) 0:02:14.703 **********
2025-05-03 01:03:53.206482 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:03:53.206492 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:03:53.206501 | orchestrator | skipping: [testbed-node-4]
2025-05-03 01:03:53.206511 | orchestrator | skipping: [testbed-node-3]
2025-05-03 01:03:53.206521 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:03:53.206530 | orchestrator | skipping: [testbed-node-5]
2025-05-03 01:03:53.206540 | orchestrator |
2025-05-03 01:03:53.206549 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-05-03 01:03:53.206559 | orchestrator | Saturday 03 May 2025 01:01:26 +0000 (0:00:02.607) 0:02:17.311 **********
2025-05-03 01:03:53.206569 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:03:53.206579 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:03:53.206589 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:03:53.206598 | orchestrator | skipping: [testbed-node-3]
2025-05-03 01:03:53.206613 | orchestrator | skipping: [testbed-node-4]
2025-05-03 01:03:53.206623 | orchestrator | skipping: [testbed-node-5]
2025-05-03 01:03:53.206633 | orchestrator |
2025-05-03 01:03:53.206643 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-05-03 01:03:53.206651 | orchestrator | Saturday 03 May 2025 01:01:27 +0000 (0:00:01.912) 0:02:19.223 **********
2025-05-03 01:03:53.206660 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:03:53.206669 | orchestrator | skipping: [testbed-node-3]
2025-05-03 01:03:53.206690 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:03:53.206699 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:03:53.206708 | orchestrator | skipping: [testbed-node-5]
2025-05-03 01:03:53.206716 | orchestrator | skipping: [testbed-node-4]
2025-05-03 01:03:53.206725 | orchestrator |
2025-05-03 01:03:53.206734 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-05-03 01:03:53.206742 | orchestrator | Saturday 03 May 2025 01:01:31 +0000 (0:00:03.670) 0:02:22.894 **********
2025-05-03 01:03:53.206751 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:03:53.206759 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:03:53.206768 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:03:53.206776 | orchestrator | skipping: [testbed-node-3]
2025-05-03 01:03:53.206785 | orchestrator | skipping: [testbed-node-4]
2025-05-03 01:03:53.206793 | orchestrator | skipping: [testbed-node-5]
2025-05-03 01:03:53.206802 | orchestrator |
2025-05-03 01:03:53.206810 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-05-03 01:03:53.206819 | orchestrator | Saturday 03 May 2025 01:01:34 +0000 (0:00:03.040) 0:02:25.934 **********
2025-05-03 01:03:53.206827 | orchestrator | skipping: [testbed-node-4]
2025-05-03 01:03:53.206842 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:03:53.206850 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:03:53.206859 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:03:53.207018 | orchestrator | skipping: [testbed-node-3]
2025-05-03 01:03:53.207028 | orchestrator | skipping: [testbed-node-5]
2025-05-03 01:03:53.207037 | orchestrator |
2025-05-03 01:03:53.207045 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-05-03 01:03:53.207054 | orchestrator | Saturday 03 May 2025 01:01:37 +0000 (0:00:02.404) 0:02:28.338 **********
2025-05-03 01:03:53.207063 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-03 01:03:53.207072 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:03:53.207081 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-03 01:03:53.207089 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:03:53.207098 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-03 01:03:53.207107 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:03:53.207134 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-03 01:03:53.207144 | orchestrator | skipping: [testbed-node-3]
2025-05-03 01:03:53.207156 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-03 01:03:53.207165 | orchestrator | skipping: [testbed-node-4]
2025-05-03 01:03:53.207174 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-03 01:03:53.207182 | orchestrator | skipping: [testbed-node-5]
2025-05-03 01:03:53.207191 | orchestrator |
2025-05-03 01:03:53.207199 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2025-05-03 01:03:53.207208 | orchestrator | Saturday 03 May 2025 01:01:39 +0000 (0:00:02.062) 0:02:30.401 **********
2025-05-03 01:03:53.207218 | orchestrator | skipping: [testbed-node-0] =>
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.207233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.207242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.207251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.207276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.207286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.207299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.207309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': 
{'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.207318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.207342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.207352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.207365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.207374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.207383 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.207392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.207402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.207429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.207446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.207456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.207465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.207474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.207483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.207492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.207516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.207530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': 
'30'}}})  2025-05-03 01:03:53.207540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.207550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.207559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.207568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.207592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.207606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.207615 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:03:53.207624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.207634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 
6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.207643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.207652 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:03:53.207710 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.207728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.207738 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.207747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.207756 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.207782 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.207796 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.207806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.207815 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.207824 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.207833 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.207843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.207873 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.207883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.207892 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.207903 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.207912 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.207921 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:03:53.207930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.207960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.207970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.207979 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.207988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.207997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.208036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.208045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': 
'30'}}})  2025-05-03 01:03:53.208055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.208064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.208087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.208111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.208132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.208141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208159 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:03:53.208169 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.208207 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208227 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208236 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent 
' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.208244 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.208280 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.208290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208298 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.208306 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208315 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.208327 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.208336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208360 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.208370 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.208379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208387 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:03:53.208396 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.208408 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208441 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.208458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.208480 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.208502 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.208520 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.208541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.208550 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.208583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 
6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.208591 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208599 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:03:53.208608 | orchestrator | 2025-05-03 01:03:53.208616 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-05-03 01:03:53.208631 | orchestrator | Saturday 03 May 2025 01:01:43 +0000 (0:00:03.879) 0:02:34.280 ********** 2025-05-03 01:03:53.208640 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.208648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208671 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208691 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.208712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208721 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.208729 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.208738 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-03 01:03:53.208772 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208793 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208802 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2025-05-03 01:03:53.208813 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208822 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.208835 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208844 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208852 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208864 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.208873 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.208882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.208902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.208911 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.208919 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208931 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-03 01:03:53.208952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-03 01:03:53.208969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.208990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.209010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.209051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.209063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.209072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.209097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.209109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.209121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.209147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.209155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.209167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.209197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.209206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.209214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.209226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.209255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.209264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-03 01:03:53.209288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-03 01:03:53.209322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 
'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.209347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.209356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.209373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.209390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.209406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.209424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.209432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209441 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-03 01:03:53.209452 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.209475 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.209483 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.209501 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.209514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209525 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-03 01:03:53.209534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.209551 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.209559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209571 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.209583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.209592 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209600 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-03 01:03:53.209609 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209617 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:03:53.209629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:03:53.209641 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209650 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}}}})  2025-05-03 01:03:53.209659 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-03 01:03:53.209667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-03 01:03:53.209686 | orchestrator | 2025-05-03 01:03:53.209695 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-03 01:03:53.209710 | orchestrator | Saturday 03 May 2025 01:01:46 +0000 (0:00:03.196) 0:02:37.477 ********** 2025-05-03 01:03:53.209718 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:03:53.209726 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:03:53.209734 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:03:53.209742 | orchestrator | skipping: [testbed-node-3] 
2025-05-03 01:03:53.209750 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:03:53.209758 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:03:53.209766 | orchestrator | 2025-05-03 01:03:53.209775 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-05-03 01:03:53.209783 | orchestrator | Saturday 03 May 2025 01:01:47 +0000 (0:00:00.946) 0:02:38.424 ********** 2025-05-03 01:03:53.209791 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:03:53.209799 | orchestrator | 2025-05-03 01:03:53.209807 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-05-03 01:03:53.209815 | orchestrator | Saturday 03 May 2025 01:01:49 +0000 (0:00:02.372) 0:02:40.797 ********** 2025-05-03 01:03:53.209823 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:03:53.209831 | orchestrator | 2025-05-03 01:03:53.209839 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-05-03 01:03:53.209847 | orchestrator | Saturday 03 May 2025 01:01:51 +0000 (0:00:02.028) 0:02:42.825 ********** 2025-05-03 01:03:53.209855 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:03:53.209863 | orchestrator | 2025-05-03 01:03:53.209871 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-03 01:03:53.209879 | orchestrator | Saturday 03 May 2025 01:02:31 +0000 (0:00:39.562) 0:03:22.388 ********** 2025-05-03 01:03:53.209887 | orchestrator | 2025-05-03 01:03:53.209895 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-03 01:03:53.209903 | orchestrator | Saturday 03 May 2025 01:02:31 +0000 (0:00:00.067) 0:03:22.455 ********** 2025-05-03 01:03:53.209911 | orchestrator | 2025-05-03 01:03:53.209922 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-03 01:03:56.231550 | orchestrator | 
Saturday 03 May 2025 01:02:31 +0000 (0:00:00.325) 0:03:22.781 ********** 2025-05-03 01:03:56.231643 | orchestrator | 2025-05-03 01:03:56.231656 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-03 01:03:56.231665 | orchestrator | Saturday 03 May 2025 01:02:31 +0000 (0:00:00.059) 0:03:22.840 ********** 2025-05-03 01:03:56.231706 | orchestrator | 2025-05-03 01:03:56.231716 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-03 01:03:56.231724 | orchestrator | Saturday 03 May 2025 01:02:31 +0000 (0:00:00.055) 0:03:22.896 ********** 2025-05-03 01:03:56.231733 | orchestrator | 2025-05-03 01:03:56.231742 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-03 01:03:56.231750 | orchestrator | Saturday 03 May 2025 01:02:31 +0000 (0:00:00.055) 0:03:22.951 ********** 2025-05-03 01:03:56.231759 | orchestrator | 2025-05-03 01:03:56.231767 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-05-03 01:03:56.231776 | orchestrator | Saturday 03 May 2025 01:02:32 +0000 (0:00:00.311) 0:03:23.263 ********** 2025-05-03 01:03:56.231784 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:03:56.231794 | orchestrator | changed: [testbed-node-2] 2025-05-03 01:03:56.231803 | orchestrator | changed: [testbed-node-1] 2025-05-03 01:03:56.231812 | orchestrator | 2025-05-03 01:03:56.231820 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-05-03 01:03:56.231829 | orchestrator | Saturday 03 May 2025 01:02:59 +0000 (0:00:27.727) 0:03:50.990 ********** 2025-05-03 01:03:56.231837 | orchestrator | changed: [testbed-node-5] 2025-05-03 01:03:56.231846 | orchestrator | changed: [testbed-node-4] 2025-05-03 01:03:56.231856 | orchestrator | changed: [testbed-node-3] 2025-05-03 01:03:56.231864 | orchestrator | 2025-05-03 01:03:56.231873 | 
orchestrator | PLAY RECAP ********************************************************************* 2025-05-03 01:03:56.231883 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-03 01:03:56.231915 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-05-03 01:03:56.231924 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-05-03 01:03:56.231933 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-03 01:03:56.231941 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-03 01:03:56.231950 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-03 01:03:56.231959 | orchestrator | 2025-05-03 01:03:56.231967 | orchestrator | 2025-05-03 01:03:56.231976 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-03 01:03:56.231984 | orchestrator | Saturday 03 May 2025 01:03:50 +0000 (0:00:50.828) 0:04:41.819 ********** 2025-05-03 01:03:56.231993 | orchestrator | =============================================================================== 2025-05-03 01:03:56.232002 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 50.83s 2025-05-03 01:03:56.232010 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 39.56s 2025-05-03 01:03:56.232019 | orchestrator | neutron : Restart neutron-server container ----------------------------- 27.73s 2025-05-03 01:03:56.232068 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.15s 2025-05-03 01:03:56.232079 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.82s 2025-05-03 
01:03:56.232100 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.22s 2025-05-03 01:03:56.232109 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 5.80s 2025-05-03 01:03:56.232118 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.61s 2025-05-03 01:03:56.232126 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 5.38s 2025-05-03 01:03:56.232135 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 5.34s 2025-05-03 01:03:56.232143 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.12s 2025-05-03 01:03:56.232152 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 4.37s 2025-05-03 01:03:56.232161 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 4.30s 2025-05-03 01:03:56.232170 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 3.88s 2025-05-03 01:03:56.232178 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.83s 2025-05-03 01:03:56.232187 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.81s 2025-05-03 01:03:56.232195 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.77s 2025-05-03 01:03:56.232204 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.72s 2025-05-03 01:03:56.232212 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 3.67s 2025-05-03 01:03:56.232221 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.66s 2025-05-03 01:03:56.232241 | orchestrator | 2025-05-03 01:03:56 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 
2025-05-03 01:03:56.236709 | orchestrator | 2025-05-03 01:03:56 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:03:56.236764 | orchestrator | 2025-05-03 01:03:56 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:03:56.237537 | orchestrator | 2025-05-03 01:03:56 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:03:56.238308 | orchestrator | 2025-05-03 01:03:56 | INFO  | Task 0de5d6c3-1f65-42ed-9823-3059d906c153 is in state STARTED 2025-05-03 01:03:59.278224 | orchestrator | 2025-05-03 01:03:56 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:03:59.278367 | orchestrator | 2025-05-03 01:03:59 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:03:59.278599 | orchestrator | 2025-05-03 01:03:59 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:03:59.279364 | orchestrator | 2025-05-03 01:03:59 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:03:59.280694 | orchestrator | 2025-05-03 01:03:59 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:03:59.281341 | orchestrator | 2025-05-03 01:03:59 | INFO  | Task 0de5d6c3-1f65-42ed-9823-3059d906c153 is in state STARTED 2025-05-03 01:04:02.324093 | orchestrator | 2025-05-03 01:03:59 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:04:02.324238 | orchestrator | 2025-05-03 01:04:02 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:04:02.324527 | orchestrator | 2025-05-03 01:04:02 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:04:02.325454 | orchestrator | 2025-05-03 01:04:02 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:04:02.326649 | orchestrator | 2025-05-03 01:04:02 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 
2025-05-03 01:04:02.327799 | orchestrator | 2025-05-03 01:04:02 | INFO  | Task 0de5d6c3-1f65-42ed-9823-3059d906c153 is in state STARTED 2025-05-03 01:04:05.372905 | orchestrator | 2025-05-03 01:04:02 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:04:05.373039 | orchestrator | 2025-05-03 01:04:05 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:04:05.373871 | orchestrator | 2025-05-03 01:04:05 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:04:05.373906 | orchestrator | 2025-05-03 01:04:05 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:04:05.373928 | orchestrator | 2025-05-03 01:04:05 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:04:05.374459 | orchestrator | 2025-05-03 01:04:05 | INFO  | Task 0de5d6c3-1f65-42ed-9823-3059d906c153 is in state STARTED 2025-05-03 01:04:05.374573 | orchestrator | 2025-05-03 01:04:05 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:04:08.403278 | orchestrator | 2025-05-03 01:04:08 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:04:08.403488 | orchestrator | 2025-05-03 01:04:08 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:04:08.404270 | orchestrator | 2025-05-03 01:04:08 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:04:08.407817 | orchestrator | 2025-05-03 01:04:08 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:04:08.408419 | orchestrator | 2025-05-03 01:04:08 | INFO  | Task 0de5d6c3-1f65-42ed-9823-3059d906c153 is in state STARTED 2025-05-03 01:04:11.455033 | orchestrator | 2025-05-03 01:04:08 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:04:11.455169 | orchestrator | 2025-05-03 01:04:11 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:04:11.456018 | 
orchestrator | 2025-05-03 01:04:11 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:04:11.456492 | orchestrator | 2025-05-03 01:04:11 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:04:11.457119 | orchestrator | 2025-05-03 01:04:11 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:04:11.457757 | orchestrator | 2025-05-03 01:04:11 | INFO  | Task 0de5d6c3-1f65-42ed-9823-3059d906c153 is in state STARTED 2025-05-03 01:04:14.500325 | orchestrator | 2025-05-03 01:04:11 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:04:14.500499 | orchestrator | 2025-05-03 01:04:14 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:04:14.501297 | orchestrator | 2025-05-03 01:04:14 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:04:14.502840 | orchestrator | 2025-05-03 01:04:14 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state STARTED 2025-05-03 01:04:14.503884 | orchestrator | 2025-05-03 01:04:14 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:04:14.505449 | orchestrator | 2025-05-03 01:04:14 | INFO  | Task 0de5d6c3-1f65-42ed-9823-3059d906c153 is in state STARTED 2025-05-03 01:04:17.562737 | orchestrator | 2025-05-03 01:04:14 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:04:17.562908 | orchestrator | 2025-05-03 01:04:17 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:04:17.563483 | orchestrator | 2025-05-03 01:04:17 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:04:17.564214 | orchestrator | 2025-05-03 01:04:17 | INFO  | Task 4f1f21dd-df2a-4694-b6ce-69f68f9dfe0f is in state SUCCESS 2025-05-03 01:04:17.566679 | orchestrator | 2025-05-03 01:04:17.566721 | orchestrator | 2025-05-03 01:04:17.566827 | orchestrator | PLAY [Group hosts based on 
configuration] ************************************** 2025-05-03 01:04:17.566845 | orchestrator | 2025-05-03 01:04:17.566860 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-03 01:04:17.566876 | orchestrator | Saturday 03 May 2025 01:02:15 +0000 (0:00:00.370) 0:00:00.370 ********** 2025-05-03 01:04:17.566892 | orchestrator | ok: [testbed-node-0] 2025-05-03 01:04:17.566909 | orchestrator | ok: [testbed-node-1] 2025-05-03 01:04:17.566923 | orchestrator | ok: [testbed-node-2] 2025-05-03 01:04:17.566938 | orchestrator | 2025-05-03 01:04:17.566952 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-03 01:04:17.567157 | orchestrator | Saturday 03 May 2025 01:02:16 +0000 (0:00:00.422) 0:00:00.792 ********** 2025-05-03 01:04:17.567180 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-05-03 01:04:17.567196 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-05-03 01:04:17.567211 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-05-03 01:04:17.567226 | orchestrator | 2025-05-03 01:04:17.567241 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-05-03 01:04:17.567256 | orchestrator | 2025-05-03 01:04:17.567271 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-03 01:04:17.567286 | orchestrator | Saturday 03 May 2025 01:02:16 +0000 (0:00:00.321) 0:00:01.114 ********** 2025-05-03 01:04:17.567301 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 01:04:17.567318 | orchestrator | 2025-05-03 01:04:17.567333 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-05-03 01:04:17.567348 | orchestrator | Saturday 03 May 2025 01:02:17 +0000 (0:00:00.801) 0:00:01.915 ********** 2025-05-03 
01:04:17.567364 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-05-03 01:04:17.567379 | orchestrator | 2025-05-03 01:04:17.567472 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-05-03 01:04:17.567517 | orchestrator | Saturday 03 May 2025 01:02:20 +0000 (0:00:03.357) 0:00:05.272 ********** 2025-05-03 01:04:17.567532 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-05-03 01:04:17.567547 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-05-03 01:04:17.567562 | orchestrator | 2025-05-03 01:04:17.567577 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-05-03 01:04:17.567592 | orchestrator | Saturday 03 May 2025 01:02:27 +0000 (0:00:06.415) 0:00:11.688 ********** 2025-05-03 01:04:17.567607 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-03 01:04:17.567622 | orchestrator | 2025-05-03 01:04:17.567637 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-05-03 01:04:17.567684 | orchestrator | Saturday 03 May 2025 01:02:30 +0000 (0:00:03.374) 0:00:15.063 ********** 2025-05-03 01:04:17.567701 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-03 01:04:17.567716 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-05-03 01:04:17.567748 | orchestrator | 2025-05-03 01:04:17.567763 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-05-03 01:04:17.567778 | orchestrator | Saturday 03 May 2025 01:02:34 +0000 (0:00:04.042) 0:00:19.105 ********** 2025-05-03 01:04:17.567793 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-03 01:04:17.567808 | orchestrator | 2025-05-03 01:04:17.567823 | orchestrator | TASK [service-ks-register : 
magnum | Granting user roles] ********************** 2025-05-03 01:04:17.567838 | orchestrator | Saturday 03 May 2025 01:02:37 +0000 (0:00:03.407) 0:00:22.513 ********** 2025-05-03 01:04:17.567853 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-05-03 01:04:17.567868 | orchestrator | 2025-05-03 01:04:17.567882 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-05-03 01:04:17.567897 | orchestrator | Saturday 03 May 2025 01:02:42 +0000 (0:00:04.228) 0:00:26.741 ********** 2025-05-03 01:04:17.567912 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:04:17.567927 | orchestrator | 2025-05-03 01:04:17.567942 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-05-03 01:04:17.567956 | orchestrator | Saturday 03 May 2025 01:02:45 +0000 (0:00:03.378) 0:00:30.119 ********** 2025-05-03 01:04:17.567971 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:04:17.567986 | orchestrator | 2025-05-03 01:04:17.568000 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-05-03 01:04:17.568015 | orchestrator | Saturday 03 May 2025 01:02:49 +0000 (0:00:04.140) 0:00:34.260 ********** 2025-05-03 01:04:17.568030 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:04:17.568045 | orchestrator | 2025-05-03 01:04:17.568060 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-05-03 01:04:17.568074 | orchestrator | Saturday 03 May 2025 01:02:53 +0000 (0:00:03.690) 0:00:37.950 ********** 2025-05-03 01:04:17.568107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-03 01:04:17.568132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-03 01:04:17.568159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-03 01:04:17.568179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-03 01:04:17.568198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-03 01:04:17.568232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-03 01:04:17.568256 | orchestrator | 2025-05-03 01:04:17.568272 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-05-03 01:04:17.568288 | orchestrator | Saturday 03 May 2025 01:02:55 +0000 (0:00:02.058) 0:00:40.008 ********** 2025-05-03 01:04:17.568305 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:04:17.568321 | orchestrator | 2025-05-03 01:04:17.568337 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-05-03 01:04:17.568489 | orchestrator | Saturday 03 May 2025 01:02:55 +0000 (0:00:00.147) 0:00:40.156 ********** 2025-05-03 01:04:17.568508 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:04:17.568523 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:04:17.568537 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:04:17.568551 | orchestrator | 2025-05-03 01:04:17.568565 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-05-03 01:04:17.568579 | orchestrator | Saturday 03 May 2025 01:02:55 +0000 (0:00:00.485) 0:00:40.641 
********** 2025-05-03 01:04:17.568593 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-03 01:04:17.568607 | orchestrator | 2025-05-03 01:04:17.568620 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-05-03 01:04:17.568634 | orchestrator | Saturday 03 May 2025 01:02:56 +0000 (0:00:00.545) 0:00:41.187 ********** 2025-05-03 01:04:17.568649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-03 01:04:17.568684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:04:17.568700 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:04:17.568715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-03 01:04:17.568751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:04:17.568767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-03 01:04:17.568782 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:04:17.568796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:04:17.568811 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:04:17.568825 | orchestrator | 2025-05-03 01:04:17.568839 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-05-03 01:04:17.568853 | orchestrator | 
Saturday 03 May 2025 01:02:57 +0000 (0:00:00.939) 0:00:42.126 ********** 2025-05-03 01:04:17.568867 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:04:17.568881 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:04:17.568895 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:04:17.568908 | orchestrator | 2025-05-03 01:04:17.568922 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-03 01:04:17.568936 | orchestrator | Saturday 03 May 2025 01:02:57 +0000 (0:00:00.305) 0:00:42.432 ********** 2025-05-03 01:04:17.568950 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 01:04:17.568965 | orchestrator | 2025-05-03 01:04:17.568978 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-05-03 01:04:17.568992 | orchestrator | Saturday 03 May 2025 01:02:58 +0000 (0:00:00.782) 0:00:43.214 ********** 2025-05-03 01:04:17.569006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-03 01:04:17.569071 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-03 01:04:17.569089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-03 01:04:17.569104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-03 01:04:17.569119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-03 01:04:17.569138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-03 01:04:17.569161 | orchestrator | 2025-05-03 01:04:17.569177 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-05-03 01:04:17.569193 | orchestrator | Saturday 03 May 2025 01:03:02 +0000 (0:00:04.216) 0:00:47.431 ********** 2025-05-03 01:04:17.569229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-03 01:04:17.569247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:04:17.569264 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:04:17.569282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-03 01:04:17.569298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:04:17.569321 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:04:17.569360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-03 01:04:17.569385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:04:17.569403 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:04:17.569419 | orchestrator |
2025-05-03 01:04:17.569434 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ******
2025-05-03 01:04:17.569450 | orchestrator | Saturday 03 May 2025 01:03:04 +0000 (0:00:02.116) 0:00:49.547 **********
2025-05-03 01:04:17.569466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-03 01:04:17.569483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:04:17.569505 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:04:17.569520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-03 01:04:17.569549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:04:17.569565 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:04:17.569580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-03 01:04:17.569595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:04:17.569609 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:04:17.569623 | orchestrator |
2025-05-03 01:04:17.569637 | orchestrator | TASK [magnum : Copying over config.json files for services] ********************
2025-05-03 01:04:17.569703 | orchestrator | Saturday 03 May 2025 01:03:08 +0000 (0:00:03.144) 0:00:52.692 **********
2025-05-03 01:04:17.569720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-03 01:04:17.569754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-03 01:04:17.569784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-03 01:04:17.569800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:04:17.569815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:04:17.569846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:04:17.569861 | orchestrator |
2025-05-03 01:04:17.569876 | orchestrator | TASK [magnum : Copying over magnum.conf] ***************************************
2025-05-03 01:04:17.569890 | orchestrator | Saturday 03 May 2025 01:03:11 +0000 (0:00:03.664) 0:00:56.356 **********
2025-05-03 01:04:17.569904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-03 01:04:17.569926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-03 01:04:17.569950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-03 01:04:17.569965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:04:17.569987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:04:17.570001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:04:17.570062 | orchestrator |
2025-05-03 01:04:17.570079 | orchestrator | TASK [magnum : Copying over existing policy file] ******************************
2025-05-03 01:04:17.570098 | orchestrator | Saturday 03 May 2025 01:03:23 +0000 (0:00:11.651) 0:01:08.008 **********
2025-05-03 01:04:17.570112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-03 01:04:17.570135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:04:17.570159 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:04:17.570172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-03 01:04:17.570185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:04:17.570198 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:04:17.570218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-03 01:04:17.570231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:04:17.570245 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:04:17.570257 | orchestrator |
2025-05-03 01:04:17.570269 | orchestrator | TASK [magnum : Check magnum containers] ****************************************
2025-05-03 01:04:17.570282 | orchestrator | Saturday 03 May 2025 01:03:23 +0000 (0:00:00.615) 0:01:08.623 **********
2025-05-03 01:04:17.570303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-03 01:04:17.570323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-03 01:04:17.570337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-03 01:04:17.570356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:04:17.570377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:04:17.570401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:04:17.570414 | orchestrator |
2025-05-03 01:04:17.570427 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-05-03 01:04:17.570439 | orchestrator | Saturday 03 May 2025 01:03:26 +0000 (0:00:02.124) 0:01:10.748 ********** 2025-05-03 01:04:17.570452 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:04:17.570464 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:04:17.570477 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:04:17.570489 | orchestrator | 2025-05-03 01:04:17.570501 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-05-03 01:04:17.570513 | orchestrator | Saturday 03 May 2025 01:03:26 +0000 (0:00:00.237) 0:01:10.986 ********** 2025-05-03 01:04:17.570526 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:04:17.570538 | orchestrator | 2025-05-03 01:04:17.570550 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-05-03 01:04:17.570562 | orchestrator | Saturday 03 May 2025 01:03:28 +0000 (0:00:02.342) 0:01:13.329 ********** 2025-05-03 01:04:17.570575 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:04:17.570587 | orchestrator | 2025-05-03 01:04:17.570599 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-05-03 01:04:17.570611 | orchestrator | Saturday 03 May 2025 01:03:30 +0000 (0:00:02.279) 0:01:15.608 ********** 2025-05-03 01:04:17.570624 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:04:17.570636 | orchestrator | 2025-05-03 01:04:17.570648 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-03 01:04:17.570679 | orchestrator | Saturday 03 May 2025 01:03:44 +0000 (0:00:13.539) 0:01:29.148 ********** 2025-05-03 01:04:17.570691 | orchestrator | 2025-05-03 01:04:17.570703 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-03 01:04:17.570716 | orchestrator | Saturday 03 May 2025 01:03:44 +0000 (0:00:00.064) 0:01:29.212 ********** 2025-05-03 01:04:17.570728 | 
orchestrator | 2025-05-03 01:04:17.570740 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-03 01:04:17.570752 | orchestrator | Saturday 03 May 2025 01:03:44 +0000 (0:00:00.237) 0:01:29.450 ********** 2025-05-03 01:04:17.570764 | orchestrator | 2025-05-03 01:04:17.570777 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-05-03 01:04:17.570789 | orchestrator | Saturday 03 May 2025 01:03:44 +0000 (0:00:00.083) 0:01:29.533 ********** 2025-05-03 01:04:17.570801 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:04:17.570813 | orchestrator | changed: [testbed-node-1] 2025-05-03 01:04:17.570826 | orchestrator | changed: [testbed-node-2] 2025-05-03 01:04:17.570838 | orchestrator | 2025-05-03 01:04:17.570850 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-05-03 01:04:17.570863 | orchestrator | Saturday 03 May 2025 01:04:03 +0000 (0:00:18.361) 0:01:47.895 ********** 2025-05-03 01:04:17.570875 | orchestrator | changed: [testbed-node-2] 2025-05-03 01:04:17.570887 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:04:17.570900 | orchestrator | changed: [testbed-node-1] 2025-05-03 01:04:17.570912 | orchestrator | 2025-05-03 01:04:17.570924 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-03 01:04:17.570949 | orchestrator | testbed-node-0 : ok=24  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-03 01:04:20.610338 | orchestrator | testbed-node-1 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-03 01:04:20.610456 | orchestrator | testbed-node-2 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-03 01:04:20.610475 | orchestrator | 2025-05-03 01:04:20.610490 | orchestrator | 2025-05-03 01:04:20.610505 | orchestrator | TASKS RECAP 
******************************************************************** 2025-05-03 01:04:20.610520 | orchestrator | Saturday 03 May 2025 01:04:15 +0000 (0:00:11.982) 0:01:59.877 ********** 2025-05-03 01:04:20.610535 | orchestrator | =============================================================================== 2025-05-03 01:04:20.610549 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 18.36s 2025-05-03 01:04:20.610562 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 13.54s 2025-05-03 01:04:20.610576 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.98s 2025-05-03 01:04:20.610590 | orchestrator | magnum : Copying over magnum.conf -------------------------------------- 11.65s 2025-05-03 01:04:20.610623 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.42s 2025-05-03 01:04:20.610637 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.23s 2025-05-03 01:04:20.610701 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 4.22s 2025-05-03 01:04:20.610717 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.14s 2025-05-03 01:04:20.610731 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.04s 2025-05-03 01:04:20.610744 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.69s 2025-05-03 01:04:20.610759 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.66s 2025-05-03 01:04:20.610773 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.41s 2025-05-03 01:04:20.610787 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.38s 2025-05-03 01:04:20.610800 | orchestrator | service-ks-register : magnum | 
Creating projects ------------------------ 3.37s 2025-05-03 01:04:20.610814 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.36s 2025-05-03 01:04:20.610828 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 3.14s 2025-05-03 01:04:20.610842 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.34s 2025-05-03 01:04:20.610855 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.28s 2025-05-03 01:04:20.610870 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.12s 2025-05-03 01:04:20.610886 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS certificate --- 2.12s 2025-05-03 01:04:20.610903 | orchestrator | 2025-05-03 01:04:17 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:04:20.610920 | orchestrator | 2025-05-03 01:04:17 | INFO  | Task 0de5d6c3-1f65-42ed-9823-3059d906c153 is in state STARTED 2025-05-03 01:04:20.610936 | orchestrator | 2025-05-03 01:04:17 | INFO  | Task 06a874a6-92e6-4a0a-af22-e9d522aa1078 is in state STARTED 2025-05-03 01:04:20.610952 | orchestrator | 2025-05-03 01:04:17 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:04:20.610984 | orchestrator | 2025-05-03 01:04:20 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state STARTED 2025-05-03 01:04:20.613487 | orchestrator | 2025-05-03 01:04:20 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:04:23.643918 | orchestrator | 2025-05-03 01:04:20 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:04:23.644039 | orchestrator | 2025-05-03 01:04:20 | INFO  | Task 0de5d6c3-1f65-42ed-9823-3059d906c153 is in state STARTED 2025-05-03 01:04:23.644150 | orchestrator | 2025-05-03 01:04:20 | INFO  | Task 06a874a6-92e6-4a0a-af22-e9d522aa1078 is in state STARTED 
[… repeated task status polling (one check per second) elided: task 0de5d6c3-1f65-42ed-9823-3059d906c153 reached state SUCCESS at 01:04:26; task 597b062e-4318-4c0e-bf14-dc89c9236159 first appeared at 01:04:29; tasks f24c7e58-ec80-40f0-801a-e43c0bb11073, b8afa8e8-8c30-428f-b3f1-be0aeda73f6b, 48a7cfec-8936-4280-adce-1507df83d421 and 06a874a6-92e6-4a0a-af22-e9d522aa1078 remained in state STARTED through 01:05:18 …]
2025-05-03 01:05:21.499356 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-03 01:05:21.499384 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-03 01:05:21.499399 | orchestrator | Saturday 03 May 2025 01:03:53 +0000 (0:00:00.248) 0:00:00.248 **********
2025-05-03 01:05:21.499413 | orchestrator | ok: [testbed-node-0]
2025-05-03 01:05:21.499428 | orchestrator | ok: [testbed-node-1]
2025-05-03 01:05:21.499441 | orchestrator | ok: [testbed-node-2]
2025-05-03 01:05:21.499456 | orchestrator | ok: [testbed-manager]
2025-05-03 01:05:21.499469 | orchestrator | ok: [testbed-node-3] 2025-05-03 01:05:21.499483 | orchestrator | ok: [testbed-node-4] 2025-05-03 01:05:21.499497 | orchestrator | ok: [testbed-node-5] 2025-05-03 01:05:21.499510 | orchestrator | 2025-05-03 01:05:21.499524 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-03 01:05:21.499538 | orchestrator | Saturday 03 May 2025 01:03:54 +0000 (0:00:00.694) 0:00:00.943 ********** 2025-05-03 01:05:21.499552 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-05-03 01:05:21.499567 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-05-03 01:05:21.499621 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-05-03 01:05:21.499637 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-05-03 01:05:21.499651 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-05-03 01:05:21.499786 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-05-03 01:05:21.499811 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-05-03 01:05:21.499901 | orchestrator | 2025-05-03 01:05:21.499922 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-05-03 01:05:21.499936 | orchestrator | 2025-05-03 01:05:21.499950 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-05-03 01:05:21.499964 | orchestrator | Saturday 03 May 2025 01:03:54 +0000 (0:00:00.730) 0:00:01.673 ********** 2025-05-03 01:05:21.499979 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 01:05:21.499993 | orchestrator | 2025-05-03 01:05:21.500007 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-05-03 
01:05:21.500021 | orchestrator | Saturday 03 May 2025 01:03:56 +0000 (0:00:01.228) 0:00:02.902 ********** 2025-05-03 01:05:21.500035 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-05-03 01:05:21.500048 | orchestrator | 2025-05-03 01:05:21.500062 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-05-03 01:05:21.500075 | orchestrator | Saturday 03 May 2025 01:03:59 +0000 (0:00:03.663) 0:00:06.565 ********** 2025-05-03 01:05:21.500090 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-05-03 01:05:21.500105 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-05-03 01:05:21.500119 | orchestrator | 2025-05-03 01:05:21.500133 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-05-03 01:05:21.500165 | orchestrator | Saturday 03 May 2025 01:04:06 +0000 (0:00:06.523) 0:00:13.089 ********** 2025-05-03 01:05:21.500180 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-03 01:05:21.500194 | orchestrator | 2025-05-03 01:05:21.500208 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-05-03 01:05:21.500222 | orchestrator | Saturday 03 May 2025 01:04:09 +0000 (0:00:03.172) 0:00:16.261 ********** 2025-05-03 01:05:21.500235 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-03 01:05:21.500249 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-05-03 01:05:21.500263 | orchestrator | 2025-05-03 01:05:21.500283 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-05-03 01:05:21.500297 | orchestrator | Saturday 03 May 2025 01:04:13 +0000 (0:00:04.011) 0:00:20.273 ********** 2025-05-03 01:05:21.500320 | 
orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-03 01:05:21.500343 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-05-03 01:05:21.500364 | orchestrator | 2025-05-03 01:05:21.500408 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-05-03 01:05:21.500432 | orchestrator | Saturday 03 May 2025 01:04:19 +0000 (0:00:06.170) 0:00:26.444 ********** 2025-05-03 01:05:21.500454 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-05-03 01:05:21.500477 | orchestrator | 2025-05-03 01:05:21.500498 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-03 01:05:21.500529 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-03 01:05:21.500554 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-03 01:05:21.500719 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-03 01:05:21.500744 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-03 01:05:21.500759 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-03 01:05:21.500787 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-03 01:05:21.506773 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-03 01:05:21.506820 | orchestrator | 2025-05-03 01:05:21.506845 | orchestrator | 2025-05-03 01:05:21.506871 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-03 01:05:21.506895 | orchestrator | Saturday 03 May 2025 01:04:25 +0000 (0:00:05.872) 0:00:32.317 ********** 2025-05-03 01:05:21.506919 | orchestrator | 
=============================================================================== 2025-05-03 01:05:21.506943 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.52s 2025-05-03 01:05:21.506967 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.17s 2025-05-03 01:05:21.506990 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.87s 2025-05-03 01:05:21.507013 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.01s 2025-05-03 01:05:21.507037 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.66s 2025-05-03 01:05:21.507058 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.17s 2025-05-03 01:05:21.507088 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.23s 2025-05-03 01:05:21.507104 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.73s 2025-05-03 01:05:21.507129 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.70s 2025-05-03 01:05:21.507160 | orchestrator | 2025-05-03 01:05:21.507175 | orchestrator | 2025-05-03 01:05:21 | INFO  | Task f24c7e58-ec80-40f0-801a-e43c0bb11073 is in state SUCCESS 2025-05-03 01:05:21.507189 | orchestrator | 2025-05-03 01:05:21 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:05:21.507204 | orchestrator | 2025-05-03 01:05:21 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:05:21.507218 | orchestrator | 2025-05-03 01:05:21 | INFO  | Task 597b062e-4318-4c0e-bf14-dc89c9236159 is in state STARTED 2025-05-03 01:05:21.507241 | orchestrator | 2025-05-03 01:05:21 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:05:21.508212 | orchestrator | 2025-05-03 01:05:21 | INFO  | Task 
06a874a6-92e6-4a0a-af22-e9d522aa1078 is in state STARTED
[… repeated task status polling (one check per second) elided: tasks b8afa8e8-8c30-428f-b3f1-be0aeda73f6b, 75cc4c1c-e46b-4eab-8eec-890312e38de3, 597b062e-4318-4c0e-bf14-dc89c9236159, 48a7cfec-8936-4280-adce-1507df83d421 and 06a874a6-92e6-4a0a-af22-e9d522aa1078 remained in state STARTED through 01:06:07; log continues …]
06a874a6-92e6-4a0a-af22-e9d522aa1078 is in state STARTED 2025-05-03 01:06:10.115683 | orchestrator | 2025-05-03 01:06:07 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:06:10.115808 | orchestrator | 2025-05-03 01:06:10 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:06:10.116315 | orchestrator | 2025-05-03 01:06:10 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:06:10.118949 | orchestrator | 2025-05-03 01:06:10 | INFO  | Task 597b062e-4318-4c0e-bf14-dc89c9236159 is in state STARTED 2025-05-03 01:06:10.126396 | orchestrator | 2025-05-03 01:06:10 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:06:10.129178 | orchestrator | 2025-05-03 01:06:10 | INFO  | Task 06a874a6-92e6-4a0a-af22-e9d522aa1078 is in state STARTED 2025-05-03 01:06:13.163943 | orchestrator | 2025-05-03 01:06:10 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:06:13.164066 | orchestrator | 2025-05-03 01:06:13 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:06:13.164818 | orchestrator | 2025-05-03 01:06:13 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:06:13.164861 | orchestrator | 2025-05-03 01:06:13 | INFO  | Task 597b062e-4318-4c0e-bf14-dc89c9236159 is in state STARTED 2025-05-03 01:06:13.168446 | orchestrator | 2025-05-03 01:06:13 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:06:16.195369 | orchestrator | 2025-05-03 01:06:13 | INFO  | Task 06a874a6-92e6-4a0a-af22-e9d522aa1078 is in state STARTED 2025-05-03 01:06:16.195599 | orchestrator | 2025-05-03 01:06:13 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:06:16.195638 | orchestrator | 2025-05-03 01:06:16 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:06:16.195930 | orchestrator | 2025-05-03 01:06:16 | INFO  | Task 
75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:06:16.195966 | orchestrator | 2025-05-03 01:06:16 | INFO  | Task 597b062e-4318-4c0e-bf14-dc89c9236159 is in state STARTED 2025-05-03 01:06:16.196438 | orchestrator | 2025-05-03 01:06:16 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:06:16.196987 | orchestrator | 2025-05-03 01:06:16 | INFO  | Task 06a874a6-92e6-4a0a-af22-e9d522aa1078 is in state STARTED 2025-05-03 01:06:19.249427 | orchestrator | 2025-05-03 01:06:16 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:06:19.249704 | orchestrator | 2025-05-03 01:06:19 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:06:19.250231 | orchestrator | 2025-05-03 01:06:19 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:06:19.250271 | orchestrator | 2025-05-03 01:06:19 | INFO  | Task 597b062e-4318-4c0e-bf14-dc89c9236159 is in state STARTED 2025-05-03 01:06:19.250307 | orchestrator | 2025-05-03 01:06:19 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:06:19.250948 | orchestrator | 2025-05-03 01:06:19 | INFO  | Task 06a874a6-92e6-4a0a-af22-e9d522aa1078 is in state STARTED 2025-05-03 01:06:22.288864 | orchestrator | 2025-05-03 01:06:19 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:06:22.288985 | orchestrator | 2025-05-03 01:06:22 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:06:22.290120 | orchestrator | 2025-05-03 01:06:22 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:06:22.290457 | orchestrator | 2025-05-03 01:06:22 | INFO  | Task 597b062e-4318-4c0e-bf14-dc89c9236159 is in state STARTED 2025-05-03 01:06:22.291131 | orchestrator | 2025-05-03 01:06:22 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:06:22.291866 | orchestrator | 2025-05-03 01:06:22 | INFO  | Task 
06a874a6-92e6-4a0a-af22-e9d522aa1078 is in state STARTED 2025-05-03 01:06:25.336522 | orchestrator | 2025-05-03 01:06:22 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:06:25.336592 | orchestrator | 2025-05-03 01:06:25 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:06:25.339825 | orchestrator | 2025-05-03 01:06:25 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:06:25.341062 | orchestrator | 2025-05-03 01:06:25 | INFO  | Task 597b062e-4318-4c0e-bf14-dc89c9236159 is in state STARTED 2025-05-03 01:06:25.344330 | orchestrator | 2025-05-03 01:06:25 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:06:25.345266 | orchestrator | 2025-05-03 01:06:25 | INFO  | Task 06a874a6-92e6-4a0a-af22-e9d522aa1078 is in state STARTED 2025-05-03 01:06:28.381921 | orchestrator | 2025-05-03 01:06:25 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:06:28.382093 | orchestrator | 2025-05-03 01:06:28 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:06:28.383782 | orchestrator | 2025-05-03 01:06:28 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:06:28.383922 | orchestrator | 2025-05-03 01:06:28 | INFO  | Task 597b062e-4318-4c0e-bf14-dc89c9236159 is in state STARTED 2025-05-03 01:06:28.385746 | orchestrator | 2025-05-03 01:06:28 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:06:28.386233 | orchestrator | 2025-05-03 01:06:28 | INFO  | Task 06a874a6-92e6-4a0a-af22-e9d522aa1078 is in state STARTED 2025-05-03 01:06:31.426407 | orchestrator | 2025-05-03 01:06:28 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:06:31.426568 | orchestrator | 2025-05-03 01:06:31 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:06:31.427890 | orchestrator | 2025-05-03 01:06:31 | INFO  | Task 
75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:06:31.428741 | orchestrator | 2025-05-03 01:06:31 | INFO  | Task 597b062e-4318-4c0e-bf14-dc89c9236159 is in state STARTED 2025-05-03 01:06:31.432580 | orchestrator | 2025-05-03 01:06:31 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:06:31.435065 | orchestrator | 2025-05-03 01:06:31 | INFO  | Task 06a874a6-92e6-4a0a-af22-e9d522aa1078 is in state STARTED 2025-05-03 01:06:31.438317 | orchestrator | 2025-05-03 01:06:31 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:06:34.492783 | orchestrator | 2025-05-03 01:06:34 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:06:34.494349 | orchestrator | 2025-05-03 01:06:34 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:06:34.495897 | orchestrator | 2025-05-03 01:06:34 | INFO  | Task 597b062e-4318-4c0e-bf14-dc89c9236159 is in state STARTED 2025-05-03 01:06:34.497356 | orchestrator | 2025-05-03 01:06:34 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:06:34.499043 | orchestrator | 2025-05-03 01:06:34 | INFO  | Task 06a874a6-92e6-4a0a-af22-e9d522aa1078 is in state STARTED 2025-05-03 01:06:37.556703 | orchestrator | 2025-05-03 01:06:34 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:06:37.556849 | orchestrator | 2025-05-03 01:06:37 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state STARTED 2025-05-03 01:06:37.558197 | orchestrator | 2025-05-03 01:06:37 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:06:37.560019 | orchestrator | 2025-05-03 01:06:37 | INFO  | Task 597b062e-4318-4c0e-bf14-dc89c9236159 is in state STARTED 2025-05-03 01:06:37.561654 | orchestrator | 2025-05-03 01:06:37 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:06:37.563868 | orchestrator | 2025-05-03 01:06:37 | INFO  | Task 
06a874a6-92e6-4a0a-af22-e9d522aa1078 is in state STARTED 2025-05-03 01:06:40.617051 | orchestrator | 2025-05-03 01:06:37 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:06:40.617191 | orchestrator | 2025-05-03 01:06:40 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED 2025-05-03 01:06:40.621206 | orchestrator | 2025-05-03 01:06:40 | INFO  | Task b8afa8e8-8c30-428f-b3f1-be0aeda73f6b is in state SUCCESS 2025-05-03 01:06:40.623559 | orchestrator | 2025-05-03 01:06:40.623610 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-05-03 01:06:40.623625 | orchestrator | 2025-05-03 01:06:40.623640 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-05-03 01:06:40.623655 | orchestrator | Saturday 03 May 2025 00:59:08 +0000 (0:00:00.123) 0:00:00.123 ********** 2025-05-03 01:06:40.623669 | orchestrator | changed: [localhost] 2025-05-03 01:06:40.623684 | orchestrator | 2025-05-03 01:06:40.623699 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-05-03 01:06:40.623713 | orchestrator | Saturday 03 May 2025 00:59:09 +0000 (0:00:00.522) 0:00:00.645 ********** 2025-05-03 01:06:40.623727 | orchestrator | 2025-05-03 01:06:40.623741 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-05-03 01:06:40.623754 | orchestrator | 2025-05-03 01:06:40.623768 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-05-03 01:06:40.623782 | orchestrator | 2025-05-03 01:06:40.623796 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-05-03 01:06:40.623810 | orchestrator | 2025-05-03 01:06:40.623823 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-05-03 01:06:40.623837 | orchestrator | 2025-05-03 01:06:40.623851 | 
orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-05-03 01:06:40.623865 | orchestrator | 2025-05-03 01:06:40.623921 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-05-03 01:06:40.623946 | orchestrator | 2025-05-03 01:06:40.623971 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-05-03 01:06:40.624016 | orchestrator | changed: [localhost] 2025-05-03 01:06:40.624041 | orchestrator | 2025-05-03 01:06:40.624574 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-05-03 01:06:40.624609 | orchestrator | Saturday 03 May 2025 01:05:04 +0000 (0:05:55.740) 0:05:56.386 ********** 2025-05-03 01:06:40.624623 | orchestrator | changed: [localhost] 2025-05-03 01:06:40.624638 | orchestrator | 2025-05-03 01:06:40.624665 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-03 01:06:40.624680 | orchestrator | 2025-05-03 01:06:40.624695 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-03 01:06:40.624709 | orchestrator | Saturday 03 May 2025 01:05:17 +0000 (0:00:12.753) 0:06:09.140 ********** 2025-05-03 01:06:40.624723 | orchestrator | ok: [testbed-node-0] 2025-05-03 01:06:40.624737 | orchestrator | ok: [testbed-node-1] 2025-05-03 01:06:40.624751 | orchestrator | ok: [testbed-node-2] 2025-05-03 01:06:40.624764 | orchestrator | 2025-05-03 01:06:40.624779 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-03 01:06:40.624793 | orchestrator | Saturday 03 May 2025 01:05:18 +0000 (0:00:00.520) 0:06:09.660 ********** 2025-05-03 01:06:40.624814 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-05-03 01:06:40.624839 | orchestrator | ok: [testbed-node-0] => 
(item=enable_ironic_False) 2025-05-03 01:06:40.624862 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-05-03 01:06:40.624885 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-05-03 01:06:40.624908 | orchestrator | 2025-05-03 01:06:40.624931 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-05-03 01:06:40.624955 | orchestrator | skipping: no hosts matched 2025-05-03 01:06:40.626415 | orchestrator | 2025-05-03 01:06:40.626441 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-03 01:06:40.626457 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-03 01:06:40.626505 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-03 01:06:40.626522 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-03 01:06:40.626537 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-03 01:06:40.626555 | orchestrator | 2025-05-03 01:06:40.626576 | orchestrator | 2025-05-03 01:06:40.626591 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-03 01:06:40.626605 | orchestrator | Saturday 03 May 2025 01:05:18 +0000 (0:00:00.464) 0:06:10.125 ********** 2025-05-03 01:06:40.626620 | orchestrator | =============================================================================== 2025-05-03 01:06:40.626634 | orchestrator | Download ironic-agent initramfs --------------------------------------- 355.74s 2025-05-03 01:06:40.626648 | orchestrator | Download ironic-agent kernel ------------------------------------------- 12.75s 2025-05-03 01:06:40.626662 | orchestrator | Ensure the destination directory exists --------------------------------- 0.52s 2025-05-03 01:06:40.626676 | 
orchestrator | Group hosts based on Kolla action --------------------------------------- 0.52s 2025-05-03 01:06:40.626690 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2025-05-03 01:06:40.626704 | orchestrator | 2025-05-03 01:06:40.626718 | orchestrator | 2025-05-03 01:06:40.626732 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-03 01:06:40.626746 | orchestrator | 2025-05-03 01:06:40.626760 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-03 01:06:40.626805 | orchestrator | Saturday 03 May 2025 01:02:28 +0000 (0:00:00.344) 0:00:00.344 ********** 2025-05-03 01:06:40.626821 | orchestrator | ok: [testbed-manager] 2025-05-03 01:06:40.626837 | orchestrator | ok: [testbed-node-0] 2025-05-03 01:06:40.626851 | orchestrator | ok: [testbed-node-1] 2025-05-03 01:06:40.626865 | orchestrator | ok: [testbed-node-2] 2025-05-03 01:06:40.626879 | orchestrator | ok: [testbed-node-3] 2025-05-03 01:06:40.626893 | orchestrator | ok: [testbed-node-4] 2025-05-03 01:06:40.626906 | orchestrator | ok: [testbed-node-5] 2025-05-03 01:06:40.626921 | orchestrator | 2025-05-03 01:06:40.626936 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-03 01:06:40.626959 | orchestrator | Saturday 03 May 2025 01:02:29 +0000 (0:00:01.046) 0:00:01.390 ********** 2025-05-03 01:06:40.627014 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-05-03 01:06:40.627032 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-05-03 01:06:40.627046 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-05-03 01:06:40.627061 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-05-03 01:06:40.627075 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-05-03 01:06:40.627089 | orchestrator | ok: 
[testbed-node-4] => (item=enable_prometheus_True) 2025-05-03 01:06:40.627157 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-05-03 01:06:40.627174 | orchestrator | 2025-05-03 01:06:40.627197 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-05-03 01:06:40.627212 | orchestrator | 2025-05-03 01:06:40.627226 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-03 01:06:40.627241 | orchestrator | Saturday 03 May 2025 01:02:30 +0000 (0:00:01.112) 0:00:02.503 ********** 2025-05-03 01:06:40.627255 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 01:06:40.627270 | orchestrator | 2025-05-03 01:06:40.627284 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-05-03 01:06:40.627298 | orchestrator | Saturday 03 May 2025 01:02:32 +0000 (0:00:01.493) 0:00:03.997 ********** 2025-05-03 01:06:40.627315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-03 01:06:40.627345 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-03 01:06:40.627361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-03 01:06:40.627387 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-03 01:06:40.627415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-03 01:06:40.627567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.627591 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-03 01:06:40.627607 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.627624 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.627640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.627699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-03 01:06:40.627739 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-03 01:06:40.627757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.627774 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.627792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-03 01:06:40.627808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.627831 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-03 01:06:40.627867 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-03 01:06:40.627888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-03 01:06:40.627903 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.627918 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.627942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.627959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-03 01:06:40.627993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-03 01:06:40.628017 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-03 01:06:40.628033 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-03 01:06:40.628054 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.628080 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-03 01:06:40.628103 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.628119 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.628140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.628162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-03 01:06:40.628182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2025-05-03 01:06:40.628289 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-03 01:06:40.628317 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.628332 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.628355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.628371 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.628422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-03 01:06:40.628544 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.628561 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.628576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.628599 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-03 01:06:40.628618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-03 01:06:40.628649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.628666 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-03 01:06:40.628688 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.628703 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.628719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.628734 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.628767 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-03 01:06:40.628783 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-03 01:06:40.628804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.628819 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.628834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.628848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.628878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-03 01:06:40.628895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.628909 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.628941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-03 01:06:40.628968 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-03 01:06:40.628989 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.629020 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.629037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.629051 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.629073 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.629088 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.629102 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': 
{'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.629126 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.629141 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.629169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.629187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-03 01:06:40.629211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-03 01:06:40.629225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.629240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.629266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-03 01:06:40.629288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 
'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.629312 | orchestrator | 2025-05-03 01:06:40.629329 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-03 01:06:40.629344 | orchestrator | Saturday 03 May 2025 01:02:37 +0000 (0:00:05.354) 0:00:09.351 ********** 2025-05-03 01:06:40.629358 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 01:06:40.629380 | orchestrator | 2025-05-03 01:06:40.629394 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-05-03 01:06:40.629418 | orchestrator | Saturday 03 May 2025 01:02:40 +0000 (0:00:02.372) 0:00:11.724 ********** 2025-05-03 01:06:40.629433 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-03 01:06:40.629449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-03 01:06:40.629463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-03 01:06:40.629506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-03 01:06:40.629521 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-03 01:06:40.629557 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-03 01:06:40.629579 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-03 01:06:40.629606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.629622 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.629637 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-03 01:06:40.629651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.629666 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.629699 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.629722 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.629745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 
01:06:40.629765 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-03 01:06:40.629785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.629800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.629824 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.629839 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.629859 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.630710 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.630749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.630774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.630848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.630894 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.630918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.630936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.631066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.631089 | orchestrator | 2025-05-03 01:06:40.631104 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-05-03 01:06:40.631119 | orchestrator | Saturday 03 May 2025 01:02:45 +0000 (0:00:05.740) 0:00:17.464 ********** 2025-05-03 01:06:40.631183 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-03 01:06:40.631204 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-03 01:06:40.631220 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-03 01:06:40.631235 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-03 01:06:40.631277 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.631292 | orchestrator | skipping: [testbed-manager] 2025-05-03 01:06:40.631405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-03 01:06:40.631435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.631451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.631467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-03 01:06:40.631560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.631576 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:06:40.631599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-03 01:06:40.631640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 01:06:40.631741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 01:06:40.631763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-03 01:06:40.631778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 01:06:40.631793 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:06:40.631808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-03 01:06:40.631865 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-03 01:06:40.631882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-03 01:06:40.631912 | orchestrator | skipping: [testbed-node-5]
2025-05-03 01:06:40.631936 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-03 01:06:40.631951 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-03 01:06:40.632064 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-03 01:06:40.632086 | orchestrator | skipping: [testbed-node-4]
2025-05-03 01:06:40.632112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-03 01:06:40.632128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 01:06:40.632187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 01:06:40.632203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-03 01:06:40.632237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-03 01:06:40.632266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 01:06:40.632360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-03 01:06:40.632381 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:06:40.632396 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-03 01:06:40.632410 | orchestrator | skipping: [testbed-node-3]
2025-05-03 01:06:40.632425 | orchestrator |
2025-05-03 01:06:40.632499 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2025-05-03 01:06:40.632516 | orchestrator | Saturday 03 May 2025 01:02:47 +0000 (0:00:02.068) 0:00:19.533 **********
2025-05-03 01:06:40.632530 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-03 01:06:40.632545 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-03 01:06:40.632560 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-03 01:06:40.632597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-03 01:06:40.632612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 01:06:40.632707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 01:06:40.632728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-03 01:06:40.632743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 01:06:40.632758 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-03 01:06:40.632848 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 01:06:40.632866 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:06:40.632882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-03 01:06:40.632896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 01:06:40.632997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 01:06:40.633020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-03 01:06:40.633035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 01:06:40.633050 | orchestrator | skipping: [testbed-manager]
2025-05-03 01:06:40.633065 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:06:40.633120 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-03 01:06:40.633172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-03 01:06:40.633190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-03 01:06:40.633205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-03 01:06:40.633220 | orchestrator | skipping: [testbed-node-3]
2025-05-03 01:06:40.633325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 01:06:40.633348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 01:06:40.633368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-03 01:06:40.633387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 01:06:40.633450 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:06:40.633554 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-03 01:06:40.633574 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-03 01:06:40.633589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-03 01:06:40.633604 | orchestrator | skipping: [testbed-node-4]
2025-05-03 01:06:40.633618 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-03 01:06:40.633736 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-03 01:06:40.633761 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-03 01:06:40.633779 | orchestrator | skipping: [testbed-node-5]
2025-05-03 01:06:40.633795 | orchestrator |
2025-05-03 01:06:40.633808 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2025-05-03 01:06:40.633863 | orchestrator | Saturday 03 May 2025 01:02:50 +0000 (0:00:02.754) 0:00:22.287 **********
2025-05-03 01:06:40.633890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-03 01:06:40.633920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-03 01:06:40.633940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-03 01:06:40.633988 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-03 01:06:40.634047 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-03 01:06:40.634067 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-03 01:06:40.634088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-03 01:06:40.634101 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-03 01:06:40.634114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-03 01:06:40.634167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-03 01:06:40.634183 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-03 01:06:40.634206 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 01:06:40.634232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 01:06:40.634245 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-03 01:06:40.634258 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 01:06:40.634271 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 01:06:40.634284 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-03 01:06:40.634334 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-03 01:06:40.634351 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 01:06:40.634376 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.634398 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.634414 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.634428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.634443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.634458 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.634559 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-03 01:06:40.634586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.634601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-03 01:06:40.634617 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.634637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': 
{'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.634665 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.634715 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-03 01:06:40.634739 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-03 01:06:40.634753 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.634766 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.634788 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.634802 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-03 01:06:40.634853 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 
'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-03 01:06:40.634878 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.634892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.634905 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.634926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.634940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.634953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.634994 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.635024 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.635040 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.635060 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.635083 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-03 01:06:40.635097 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-03 01:06:40.635151 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.635178 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.635245 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-05-03 01:06:40.635259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.635283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-03 01:06:40.635297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-03 01:06:40.635353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.635369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-03 01:06:40.635394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-03 01:06:40.635408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.635421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-03 01:06:40.635684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-03 01:06:40.635855 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.635933 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-03 01:06:40.635954 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.635971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-05-03 01:06:40.635987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.636002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-03 01:06:40.636178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.636200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.636216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.636245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-03 01:06:40.636261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 
01:06:40.636276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.636291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.636315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-03 01:06:40.636369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 
'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-03 01:06:40.636387 | orchestrator |
2025-05-03 01:06:40.636403 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-05-03 01:06:40.636419 | orchestrator | Saturday 03 May 2025 01:02:57 +0000 (0:00:06.905) 0:00:29.193 **********
2025-05-03 01:06:40.636433 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-03 01:06:40.636449 | orchestrator |
2025-05-03 01:06:40.636463 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-05-03 01:06:40.636556 | orchestrator | Saturday 03 May 2025 01:02:58 +0000 (0:00:00.628) 0:00:29.822 **********
2025-05-03 01:06:40.636572 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1330023, 'dev': 129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5366254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.636602 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1330023, 'dev': 129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5366254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True,
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.636619 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1330023, 'dev': 129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5366254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.636634 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1330023, 'dev': 129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5366254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.636659 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1330039, 'dev': 129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5396254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2025-05-03 01:06:40.636715 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1330023, 'dev': 129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5366254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.636732 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1330023, 'dev': 129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5366254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.636748 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1330039, 'dev': 129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5396254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.636774 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1330023, 'dev': 129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5366254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-03 01:06:40.636790 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1330039, 'dev': 129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5396254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.636805 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1330039, 'dev': 129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5396254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.636828 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1330028, 'dev': 
129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5376253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.636851 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1330039, 'dev': 129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5396254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.636941 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1330039, 'dev': 129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5396254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.636972 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1330028, 'dev': 129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5376253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.636997 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1330028, 'dev': 129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5376253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.637021 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1330028, 'dev': 129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5376253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.637048 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1330028, 'dev': 129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5376253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.637094 | 
orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1330035, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5396254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.637112 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1330028, 'dev': 129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5376253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.637169 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1330035, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5396254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.637199 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1330035, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5396254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.637214 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1330035, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5396254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.637229 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1330035, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5396254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.637252 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1330035, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 
'mtime': 1737057118.0, 'ctime': 1746231268.5396254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.637268 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1330061, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5426254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.637282 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1330039, 'dev': 129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5396254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-03 01:06:40.637347 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1330061, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5426254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.637365 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1330061, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5426254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.637381 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1330061, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5426254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.637395 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1330061, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5426254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.637418 | orchestrator | skipping: [testbed-node-3] => 
(item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1330061, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5426254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.637433 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1330043, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5406256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.637449 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1330043, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5406256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-03 01:06:40.637540 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1330043, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5406256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.637559 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1330043, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5406256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.637575 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1330043, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5406256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.637589 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1330043, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5406256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.637614 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1330034, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5386255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.637631 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1330034, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5386255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.637646 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1330034, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5386255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.637702 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1330034, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5386255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.637720 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1330028, 'dev': 129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5376253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.637735 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1330034, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5386255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.637749 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1330034, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5386255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.637771 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1330041, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5396254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.637786 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1330041, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5396254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.637801 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1330041, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5396254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.637858 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1330041, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5396254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.637876 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1330060, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5426254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.637891 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1330041, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5396254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.637913 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1330060, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5426254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.637928 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1330041, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5396254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.637942 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1330060, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5426254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.637957 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1330060, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5426254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.638056 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1330060, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5426254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.638080 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1330060, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5426254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.638095 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1330031, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5376253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.638118 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1330031, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5376253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.638133 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1330031, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5376253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.638148 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1330035, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5396254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.638172 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1330047, 'dev': 129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5406256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.638188 | orchestrator | skipping: [testbed-node-5]
2025-05-03 01:06:40.638240 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1330031, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5376253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.638258 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1330031, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5376253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.638273 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1330031, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5376253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.638361 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1330047, 'dev': 129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5406256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.638376 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:06:40.638391 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1330047, 'dev': 129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5406256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.638406 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:06:40.638420 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1330047, 'dev': 129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5406256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.638435 | orchestrator | skipping: [testbed-node-4]
2025-05-03 01:06:40.638461 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1330047, 'dev': 129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5406256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.638501 | orchestrator | skipping: [testbed-node-3]
2025-05-03 01:06:40.638551 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1330047, 'dev': 129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5406256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.638568 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:06:40.638583 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1330061, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5426254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.638606 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1330043, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5406256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.638621 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1330034, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5386255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.638636 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1330041, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5396254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.638660 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1330060, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5426254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.638676 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1330031, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5376253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.638721 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1330047, 'dev': 129, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746231268.5406256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-03 01:06:40.638749 | orchestrator |
2025-05-03 01:06:40.638764 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-05-03 01:06:40.638779 | orchestrator | Saturday 03 May 2025 01:03:39 +0000 (0:00:41.420) 0:01:11.243 **********
2025-05-03 01:06:40.638793 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-03 01:06:40.638807 | orchestrator |
2025-05-03 01:06:40.638821 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-05-03 01:06:40.638835 | orchestrator | Saturday 03 May 2025 01:03:39 +0000 (0:00:00.428) 0:01:11.671 **********
2025-05-03 01:06:40.638849 | orchestrator | [WARNING]: Skipped
2025-05-03 01:06:40.638863 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-03 01:06:40.638879 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2025-05-03 01:06:40.638893 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-03 01:06:40.638906 | orchestrator | manager/prometheus.yml.d' is not a directory
2025-05-03 01:06:40.638920 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-03 01:06:40.638934 | orchestrator | [WARNING]: Skipped
2025-05-03 01:06:40.638949 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-03 01:06:40.638963 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2025-05-03 01:06:40.638977 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-03 01:06:40.638991 | orchestrator | node-0/prometheus.yml.d' is not a directory
2025-05-03 01:06:40.639005 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-03 01:06:40.639019 | orchestrator | [WARNING]: Skipped
2025-05-03 01:06:40.639033 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-03 01:06:40.639047 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2025-05-03 01:06:40.639060 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-03 01:06:40.639074 | orchestrator | node-1/prometheus.yml.d' is not a directory
2025-05-03 01:06:40.639088 | orchestrator | [WARNING]: Skipped
2025-05-03 01:06:40.639102 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-03 01:06:40.639116 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2025-05-03 01:06:40.639130 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-03 01:06:40.639144 | orchestrator | node-2/prometheus.yml.d' is not a directory
2025-05-03 01:06:40.639158 | orchestrator | [WARNING]: Skipped
2025-05-03 01:06:40.639171 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-03 01:06:40.639186 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2025-05-03 01:06:40.639199 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-03 01:06:40.639213 | orchestrator | node-3/prometheus.yml.d' is not a directory
2025-05-03 01:06:40.639227 | orchestrator | [WARNING]: Skipped
2025-05-03 01:06:40.639242 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-03 01:06:40.639256 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2025-05-03 01:06:40.639270 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-03 01:06:40.639284 | orchestrator | node-4/prometheus.yml.d' is not a directory
2025-05-03 01:06:40.639298 | orchestrator | [WARNING]: Skipped
2025-05-03 01:06:40.639312 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-03 01:06:40.639326 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2025-05-03 01:06:40.639340 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-03 01:06:40.639354 | orchestrator | node-5/prometheus.yml.d' is not a directory
2025-05-03 01:06:40.639368 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-05-03 01:06:40.639389 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-05-03 01:06:40.639403 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-03 01:06:40.639417 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-03 01:06:40.639431 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-03 01:06:40.639445 | orchestrator |
2025-05-03 01:06:40.639459 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-05-03 01:06:40.639493 | orchestrator | Saturday 03 May 2025 01:03:41 +0000 (0:00:01.456) 0:01:13.127 **********
2025-05-03 01:06:40.639508 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-03 01:06:40.639523 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:06:40.639537 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-03 01:06:40.639551 | orchestrator | skipping: [testbed-node-3]
2025-05-03 01:06:40.639565 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-03 01:06:40.639579 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:06:40.639628 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-03 01:06:40.639644 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:06:40.639659 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-03 01:06:40.639672 | orchestrator | skipping: [testbed-node-4]
2025-05-03 01:06:40.639686 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-03 01:06:40.639700 | orchestrator | skipping: [testbed-node-5]
2025-05-03 01:06:40.639714 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-03 01:06:40.639728 | orchestrator |
2025-05-03 01:06:40.639743 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-05-03 01:06:40.639757 | orchestrator | Saturday 03 May 2025 01:03:56 +0000 (0:00:14.776) 0:01:27.904 **********
2025-05-03 01:06:40.639771 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-03 01:06:40.639785 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:06:40.639799 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-03 01:06:40.639813 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:06:40.639828 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-03 01:06:40.639842 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:06:40.639856 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-03 01:06:40.639870 | orchestrator | skipping: [testbed-node-3]
2025-05-03 01:06:40.639884 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-03 01:06:40.639898 | orchestrator | skipping: [testbed-node-4]
2025-05-03 01:06:40.639912 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-03 01:06:40.639926 | orchestrator | skipping: [testbed-node-5]
2025-05-03 01:06:40.639940 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-03 01:06:40.639954 | orchestrator |
2025-05-03 01:06:40.639968 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-05-03 01:06:40.639982 | orchestrator | Saturday 03 May 2025 01:04:00 +0000 (0:00:04.433) 0:01:32.338 **********
2025-05-03 01:06:40.639996 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-03 01:06:40.640010 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:06:40.640025 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-03 01:06:40.640047 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:06:40.640061 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-03 01:06:40.640075 | orchestrator | skipping: [testbed-node-4]
2025-05-03 01:06:40.640089 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-03 01:06:40.640103 | orchestrator | skipping: [testbed-node-3]
2025-05-03 01:06:40.640117 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-03 01:06:40.640131 | orchestrator | skipping: [testbed-node-5]
2025-05-03 01:06:40.640146 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-03 01:06:40.640160 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:06:40.640174 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-03 01:06:40.640187 | orchestrator |
2025-05-03 01:06:40.640208 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-05-03 01:06:40.640222 | orchestrator | Saturday 03 May 2025 01:04:04 +0000 (0:00:04.056) 0:01:36.395 **********
2025-05-03 01:06:40.640236 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-03 01:06:40.640250 | orchestrator |
2025-05-03 01:06:40.640264 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-05-03 01:06:40.640278 | orchestrator | Saturday 03 May 2025 01:04:05 +0000 (0:00:00.678) 0:01:37.073 **********
2025-05-03 01:06:40.640292 | orchestrator | skipping: [testbed-manager]
2025-05-03 01:06:40.640306 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:06:40.640319 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:06:40.640333 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:06:40.640347 | orchestrator | skipping: [testbed-node-3]
2025-05-03 01:06:40.640361 | orchestrator | skipping: [testbed-node-4]
2025-05-03 01:06:40.640375 | orchestrator | skipping: [testbed-node-5]
2025-05-03 01:06:40.640389 | orchestrator |
2025-05-03 01:06:40.640403 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-05-03 01:06:40.640417 | orchestrator | Saturday 03 May 2025 01:04:06 +0000 (0:00:00.877) 0:01:37.951 **********
2025-05-03 01:06:40.640431 | orchestrator | skipping: [testbed-manager]
2025-05-03 01:06:40.640445 | orchestrator | skipping: [testbed-node-4]
2025-05-03 01:06:40.640459 | orchestrator | skipping: [testbed-node-5]
2025-05-03 01:06:40.640500 | orchestrator | skipping: [testbed-node-3]
2025-05-03 01:06:40.640515 | orchestrator | changed: [testbed-node-1]
2025-05-03 01:06:40.640530 | orchestrator | changed: [testbed-node-0]
2025-05-03 01:06:40.640543 | orchestrator | changed: [testbed-node-2]
2025-05-03 01:06:40.640558 | orchestrator |
2025-05-03 01:06:40.640578 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-05-03 01:06:40.640593 | orchestrator | Saturday 03 May 2025 01:04:09 +0000 (0:00:03.620) 0:01:41.571 **********
2025-05-03 01:06:40.640607 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-03 01:06:40.640621 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:06:40.640645 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-03 01:06:40.640660 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:06:40.640676 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-03 01:06:40.640691 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:06:40.640706 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-03 01:06:40.640720 | orchestrator | skipping: [testbed-node-4]
2025-05-03 01:06:40.640734 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-03 01:06:40.640755 | orchestrator | skipping: [testbed-node-5]
2025-05-03 01:06:40.640769 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-03 01:06:40.640783 | orchestrator | skipping: [testbed-node-3]
2025-05-03 01:06:40.640797 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-03 01:06:40.640811 | orchestrator | skipping: [testbed-manager]
2025-05-03 01:06:40.640825 | orchestrator |
2025-05-03 01:06:40.640839 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-05-03 01:06:40.640853 | orchestrator | Saturday 03 May 2025 01:04:12 +0000 (0:00:02.530) 0:01:44.101 **********
2025-05-03 01:06:40.640867 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-03 01:06:40.640882 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:06:40.640896 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-03 01:06:40.640911 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:06:40.640924 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-03 01:06:40.640939 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:06:40.640953 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-03 01:06:40.640967 | orchestrator | skipping: [testbed-node-4]
2025-05-03 01:06:40.640981 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-03 01:06:40.640995 | orchestrator | skipping: [testbed-node-3]
2025-05-03 01:06:40.641010 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-03 01:06:40.641024 | orchestrator | skipping: [testbed-node-5]
2025-05-03 01:06:40.641043 |
orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-05-03 01:06:40.641057 | orchestrator | 2025-05-03 01:06:40.641071 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-05-03 01:06:40.641086 | orchestrator | Saturday 03 May 2025 01:04:15 +0000 (0:00:03.442) 0:01:47.544 ********** 2025-05-03 01:06:40.641099 | orchestrator | [WARNING]: Skipped 2025-05-03 01:06:40.641114 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-05-03 01:06:40.641128 | orchestrator | due to this access issue: 2025-05-03 01:06:40.641142 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-05-03 01:06:40.641156 | orchestrator | not a directory 2025-05-03 01:06:40.641177 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-03 01:06:40.641191 | orchestrator | 2025-05-03 01:06:40.641205 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-05-03 01:06:40.641219 | orchestrator | Saturday 03 May 2025 01:04:17 +0000 (0:00:02.057) 0:01:49.602 ********** 2025-05-03 01:06:40.641233 | orchestrator | skipping: [testbed-manager] 2025-05-03 01:06:40.641247 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:06:40.641261 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:06:40.641275 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:06:40.641289 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:06:40.641303 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:06:40.641317 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:06:40.641331 | orchestrator | 2025-05-03 01:06:40.641344 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-05-03 01:06:40.641358 | orchestrator | Saturday 03 May 2025 01:04:18 +0000 (0:00:01.001) 0:01:50.603 ********** 
2025-05-03 01:06:40.641372 | orchestrator | skipping: [testbed-manager] 2025-05-03 01:06:40.641386 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:06:40.641400 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:06:40.641421 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:06:40.641435 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:06:40.641449 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:06:40.641463 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:06:40.641494 | orchestrator | 2025-05-03 01:06:40.641508 | orchestrator | TASK [prometheus : Copying over prometheus msteams config file] **************** 2025-05-03 01:06:40.641523 | orchestrator | Saturday 03 May 2025 01:04:19 +0000 (0:00:00.668) 0:01:51.272 ********** 2025-05-03 01:06:40.641537 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-03 01:06:40.641551 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:06:40.641576 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-03 01:06:40.641591 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:06:40.641605 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-03 01:06:40.641619 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:06:40.641634 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-03 01:06:40.641648 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:06:40.641662 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-03 01:06:40.641676 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:06:40.641695 | orchestrator | skipping: [testbed-node-5] => 
(item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-03 01:06:40.641710 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:06:40.641724 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-03 01:06:40.641738 | orchestrator | skipping: [testbed-manager] 2025-05-03 01:06:40.641752 | orchestrator | 2025-05-03 01:06:40.641766 | orchestrator | TASK [prometheus : Copying over prometheus msteams template file] ************** 2025-05-03 01:06:40.641780 | orchestrator | Saturday 03 May 2025 01:04:21 +0000 (0:00:02.311) 0:01:53.584 ********** 2025-05-03 01:06:40.641794 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-03 01:06:40.641808 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:06:40.641822 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-03 01:06:40.641837 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:06:40.641851 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-03 01:06:40.641865 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:06:40.641879 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-03 01:06:40.641893 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:06:40.641907 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-03 01:06:40.641921 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:06:40.641935 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-03 01:06:40.641949 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:06:40.641963 | orchestrator | skipping: [testbed-manager] => 
(item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-03 01:06:40.641977 | orchestrator | skipping: [testbed-manager] 2025-05-03 01:06:40.641991 | orchestrator | 2025-05-03 01:06:40.642005 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-05-03 01:06:40.642048 | orchestrator | Saturday 03 May 2025 01:04:24 +0000 (0:00:02.528) 0:01:56.112 ********** 2025-05-03 01:06:40.642066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-03 01:06:40.642091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-03 01:06:40.642116 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-03 01:06:40.642144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-03 01:06:40.642160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-03 01:06:40.642175 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-03 01:06:40.642196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2025-05-03 01:06:40.642213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-03 01:06:40.642235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-03 01:06:40.642259 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-03 01:06:40.642275 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.642290 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.642312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-03 01:06:40.642327 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-03 01:06:40.642342 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.642373 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.642390 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-03 01:06:40.642405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.642420 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.642435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.642460 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2025-05-03 01:06:40.642528 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.642570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.642604 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.642629 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.642655 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-03 01:06:40.642693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-03 01:06:40.642710 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.642726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.642760 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.642776 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.642791 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.642806 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-03 01:06:40.642836 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-03 01:06:40.642850 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.642863 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.642883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.642896 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.642910 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-03 01:06:40.642938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-03 01:06:40.642951 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.642964 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': 
{'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.642984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.642998 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-03 
01:06:40.643018 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-03 01:06:40.643031 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.643052 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': 
{}}}) 2025-05-03 01:06:40.643066 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.643086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.643100 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.643113 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.643140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.643154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-03 
01:06:40.643168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-03 01:06:40.643188 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.643201 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-03 
01:06:40.643224 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.643242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.643255 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.643268 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.643281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-03 01:06:40.643302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-03 01:06:40.643333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-03 01:06:40.643346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-03 01:06:40.643360 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-03 01:06:40.643373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.643393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 
01:06:40.643407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-03 01:06:40.643434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.643448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-03 01:06:40.643461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.643493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-03 01:06:40.643507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.643520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-05-03 01:06:40.643541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.643573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-03 01:06:40.643586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-03 01:06:40.643599 | orchestrator | 2025-05-03 01:06:40.643612 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-05-03 01:06:40.643625 | orchestrator | Saturday 03 May 2025 01:04:29 +0000 (0:00:05.041) 0:02:01.153 ********** 2025-05-03 01:06:40.643638 | 
orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-05-03 01:06:40.643650 | orchestrator |
2025-05-03 01:06:40.643663 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-03 01:06:40.643675 | orchestrator | Saturday 03 May 2025 01:04:32 +0000 (0:00:03.008) 0:02:04.162 **********
2025-05-03 01:06:40.643688 | orchestrator |
2025-05-03 01:06:40.643700 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-03 01:06:40.643712 | orchestrator | Saturday 03 May 2025 01:04:32 +0000 (0:00:00.061) 0:02:04.223 **********
2025-05-03 01:06:40.643724 | orchestrator |
2025-05-03 01:06:40.643737 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-03 01:06:40.643749 | orchestrator | Saturday 03 May 2025 01:04:32 +0000 (0:00:00.290) 0:02:04.513 **********
2025-05-03 01:06:40.643761 | orchestrator |
2025-05-03 01:06:40.643778 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-03 01:06:40.643791 | orchestrator | Saturday 03 May 2025 01:04:32 +0000 (0:00:00.059) 0:02:04.573 **********
2025-05-03 01:06:40.643803 | orchestrator |
2025-05-03 01:06:40.643816 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-03 01:06:40.643828 | orchestrator | Saturday 03 May 2025 01:04:32 +0000 (0:00:00.055) 0:02:04.629 **********
2025-05-03 01:06:40.643840 | orchestrator |
2025-05-03 01:06:40.643853 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-03 01:06:40.643865 | orchestrator | Saturday 03 May 2025 01:04:32 +0000 (0:00:00.053) 0:02:04.682 **********
2025-05-03 01:06:40.643877 | orchestrator |
2025-05-03 01:06:40.643889 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-03 01:06:40.643902 | orchestrator | Saturday 03 May 2025 01:04:33 +0000 (0:00:00.319) 0:02:05.002 **********
2025-05-03 01:06:40.643914 | orchestrator |
2025-05-03 01:06:40.643926 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-05-03 01:06:40.643938 | orchestrator | Saturday 03 May 2025 01:04:33 +0000 (0:00:00.073) 0:02:05.075 **********
2025-05-03 01:06:40.643951 | orchestrator | changed: [testbed-manager]
2025-05-03 01:06:40.643963 | orchestrator |
2025-05-03 01:06:40.643975 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-05-03 01:06:40.643987 | orchestrator | Saturday 03 May 2025 01:04:51 +0000 (0:00:17.761) 0:02:22.837 **********
2025-05-03 01:06:40.644000 | orchestrator | changed: [testbed-node-1]
2025-05-03 01:06:40.644012 | orchestrator | changed: [testbed-manager]
2025-05-03 01:06:40.644024 | orchestrator | changed: [testbed-node-5]
2025-05-03 01:06:40.644044 | orchestrator | changed: [testbed-node-4]
2025-05-03 01:06:40.644056 | orchestrator | changed: [testbed-node-0]
2025-05-03 01:06:40.644068 | orchestrator | changed: [testbed-node-3]
2025-05-03 01:06:40.644081 | orchestrator | changed: [testbed-node-2]
2025-05-03 01:06:40.644098 | orchestrator |
2025-05-03 01:06:40.644111 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-05-03 01:06:40.644123 | orchestrator | Saturday 03 May 2025 01:05:11 +0000 (0:00:20.333) 0:02:43.170 **********
2025-05-03 01:06:40.644136 | orchestrator | changed: [testbed-node-2]
2025-05-03 01:06:40.644148 | orchestrator | changed: [testbed-node-0]
2025-05-03 01:06:40.644160 | orchestrator | changed: [testbed-node-1]
2025-05-03 01:06:40.644172 | orchestrator |
2025-05-03 01:06:40.644185 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-05-03 01:06:40.644197 | orchestrator | Saturday 03 May 2025 01:05:24 +0000 (0:00:12.980) 0:02:56.151 **********
2025-05-03 01:06:40.644210 | orchestrator | changed: [testbed-node-0]
2025-05-03 01:06:40.644222 | orchestrator | changed: [testbed-node-2]
2025-05-03 01:06:40.644234 | orchestrator | changed: [testbed-node-1]
2025-05-03 01:06:40.644247 | orchestrator |
2025-05-03 01:06:40.644259 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-05-03 01:06:40.644272 | orchestrator | Saturday 03 May 2025 01:05:40 +0000 (0:00:15.593) 0:03:11.744 **********
2025-05-03 01:06:40.644284 | orchestrator | changed: [testbed-node-0]
2025-05-03 01:06:40.644302 | orchestrator | changed: [testbed-node-1]
2025-05-03 01:06:40.644315 | orchestrator | changed: [testbed-node-3]
2025-05-03 01:06:40.644327 | orchestrator | changed: [testbed-node-5]
2025-05-03 01:06:40.644339 | orchestrator | changed: [testbed-node-4]
2025-05-03 01:06:40.644351 | orchestrator | changed: [testbed-node-2]
2025-05-03 01:06:40.644364 | orchestrator | changed: [testbed-manager]
2025-05-03 01:06:40.644376 | orchestrator |
2025-05-03 01:06:40.644388 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2025-05-03 01:06:40.644401 | orchestrator | Saturday 03 May 2025 01:06:03 +0000 (0:00:22.968) 0:03:34.713 **********
2025-05-03 01:06:40.644413 | orchestrator | changed: [testbed-manager]
2025-05-03 01:06:40.644426 | orchestrator |
2025-05-03 01:06:40.644438 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-05-03 01:06:40.644451 | orchestrator | Saturday 03 May 2025 01:06:11 +0000 (0:00:08.744) 0:03:43.457 **********
2025-05-03 01:06:40.644463 | orchestrator | changed: [testbed-node-0]
2025-05-03 01:06:40.644520 | orchestrator | changed: [testbed-node-1]
2025-05-03 01:06:40.644533 | orchestrator | changed: [testbed-node-2]
2025-05-03 01:06:40.644546 | orchestrator |
2025-05-03 01:06:40.644558 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-05-03 01:06:40.644571 | orchestrator | Saturday 03 May 2025 01:06:19 +0000 (0:00:07.766) 0:03:51.224 **********
2025-05-03 01:06:40.644583 | orchestrator | changed: [testbed-manager]
2025-05-03 01:06:40.644595 | orchestrator |
2025-05-03 01:06:40.644608 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-05-03 01:06:40.644620 | orchestrator | Saturday 03 May 2025 01:06:26 +0000 (0:00:07.352) 0:03:58.576 **********
2025-05-03 01:06:40.644632 | orchestrator | changed: [testbed-node-5]
2025-05-03 01:06:40.644645 | orchestrator | changed: [testbed-node-4]
2025-05-03 01:06:40.644657 | orchestrator | changed: [testbed-node-3]
2025-05-03 01:06:40.644669 | orchestrator |
2025-05-03 01:06:40.644682 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 01:06:40.644695 | orchestrator | testbed-manager : ok=24  changed=15  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-05-03 01:06:40.644710 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-05-03 01:06:40.644722 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-05-03 01:06:40.644742 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-05-03 01:06:40.644755 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-03 01:06:40.644767 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-03 01:06:40.644785 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-03 01:06:40.644806 | orchestrator |
2025-05-03 01:06:40.644830 | orchestrator |
2025-05-03 01:06:40.644850 | orchestrator | TASKS RECAP ********************************************************************
2025-05-03 01:06:40.644872 | orchestrator | Saturday 03 May 2025 01:06:38 +0000 (0:00:12.064) 0:04:10.640 **********
2025-05-03 01:06:40.644893 | orchestrator | ===============================================================================
2025-05-03 01:06:40.644912 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 41.42s
2025-05-03 01:06:40.644938 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 22.97s
2025-05-03 01:06:40.644958 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 20.33s
2025-05-03 01:06:40.644976 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 17.76s
2025-05-03 01:06:40.644995 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 15.59s
2025-05-03 01:06:40.645014 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 14.78s
2025-05-03 01:06:40.645033 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 12.98s
2025-05-03 01:06:40.645049 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 12.06s
2025-05-03 01:06:40.645065 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.74s
2025-05-03 01:06:40.645081 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 7.77s
2025-05-03 01:06:40.645097 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 7.35s
2025-05-03 01:06:40.645112 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.91s
2025-05-03 01:06:40.645129 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.74s
2025-05-03 01:06:40.645146 | orchestrator | prometheus : Ensuring config
directories exist -------------------------- 5.35s 2025-05-03 01:06:40.645162 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.04s 2025-05-03 01:06:40.645177 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.43s 2025-05-03 01:06:40.645191 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 4.06s 2025-05-03 01:06:40.645207 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.62s 2025-05-03 01:06:40.645231 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 3.44s 2025-05-03 01:06:43.690753 | orchestrator | prometheus : Creating prometheus database user and setting permissions --- 3.01s 2025-05-03 01:06:43.690875 | orchestrator | 2025-05-03 01:06:40 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:06:43.690895 | orchestrator | 2025-05-03 01:06:40 | INFO  | Task 597b062e-4318-4c0e-bf14-dc89c9236159 is in state STARTED 2025-05-03 01:06:43.690910 | orchestrator | 2025-05-03 01:06:40 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:06:43.690924 | orchestrator | 2025-05-03 01:06:40 | INFO  | Task 06a874a6-92e6-4a0a-af22-e9d522aa1078 is in state STARTED 2025-05-03 01:06:43.690938 | orchestrator | 2025-05-03 01:06:40 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:06:43.690969 | orchestrator | 2025-05-03 01:06:43 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED 2025-05-03 01:06:43.692738 | orchestrator | 2025-05-03 01:06:43 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:06:43.694126 | orchestrator | 2025-05-03 01:06:43 | INFO  | Task 597b062e-4318-4c0e-bf14-dc89c9236159 is in state STARTED 2025-05-03 01:06:43.695657 | orchestrator | 2025-05-03 01:06:43 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 
2025-05-03 01:06:43.697759 | orchestrator | 2025-05-03 01:06:43 | INFO  | Task 06a874a6-92e6-4a0a-af22-e9d522aa1078 is in state STARTED 2025-05-03 01:06:43.698671 | orchestrator | 2025-05-03 01:06:43 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:07:29.478312 | orchestrator | 2025-05-03 01:07:29 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED 2025-05-03 01:07:29.479235 | orchestrator | 2025-05-03 01:07:29 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:07:29.481035 | orchestrator | 2025-05-03 01:07:29 | INFO  | Task 597b062e-4318-4c0e-bf14-dc89c9236159 is in state STARTED 2025-05-03 01:07:29.482722 | orchestrator | 2025-05-03 01:07:29 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:07:29.483102 | orchestrator | 2025-05-03 01:07:29 | INFO  | Task
06a874a6-92e6-4a0a-af22-e9d522aa1078 is in state STARTED 2025-05-03 01:07:32.532027 | orchestrator | 2025-05-03 01:07:29 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:07:32.532186 | orchestrator | 2025-05-03 01:07:32 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED 2025-05-03 01:07:32.534155 | orchestrator | 2025-05-03 01:07:32 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:07:32.536735 | orchestrator | 2025-05-03 01:07:32 | INFO  | Task 597b062e-4318-4c0e-bf14-dc89c9236159 is in state SUCCESS 2025-05-03 01:07:32.538806 | orchestrator | 2025-05-03 01:07:32.538851 | orchestrator | 2025-05-03 01:07:32.538867 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-03 01:07:32.538882 | orchestrator | 2025-05-03 01:07:32.538897 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-03 01:07:32.538911 | orchestrator | Saturday 03 May 2025 01:04:28 +0000 (0:00:00.284) 0:00:00.284 ********** 2025-05-03 01:07:32.538926 | orchestrator | ok: [testbed-node-0] 2025-05-03 01:07:32.538942 | orchestrator | ok: [testbed-node-1] 2025-05-03 01:07:32.539102 | orchestrator | ok: [testbed-node-2] 2025-05-03 01:07:32.539118 | orchestrator | ok: [testbed-node-3] 2025-05-03 01:07:32.539132 | orchestrator | ok: [testbed-node-4] 2025-05-03 01:07:32.539146 | orchestrator | ok: [testbed-node-5] 2025-05-03 01:07:32.539160 | orchestrator | 2025-05-03 01:07:32.539525 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-03 01:07:32.539543 | orchestrator | Saturday 03 May 2025 01:04:29 +0000 (0:00:00.740) 0:00:01.024 ********** 2025-05-03 01:07:32.539557 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-05-03 01:07:32.539571 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-05-03 01:07:32.539586 | orchestrator | ok: 
[testbed-node-2] => (item=enable_cinder_True) 2025-05-03 01:07:32.539600 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-05-03 01:07:32.539614 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-05-03 01:07:32.539628 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-05-03 01:07:32.539641 | orchestrator | 2025-05-03 01:07:32.539656 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-05-03 01:07:32.539671 | orchestrator | 2025-05-03 01:07:32.539686 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-03 01:07:32.539699 | orchestrator | Saturday 03 May 2025 01:04:30 +0000 (0:00:00.989) 0:00:02.014 ********** 2025-05-03 01:07:32.539714 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 01:07:32.539754 | orchestrator | 2025-05-03 01:07:32.539769 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-05-03 01:07:32.539783 | orchestrator | Saturday 03 May 2025 01:04:32 +0000 (0:00:01.508) 0:00:03.522 ********** 2025-05-03 01:07:32.539797 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-05-03 01:07:32.539811 | orchestrator | 2025-05-03 01:07:32.539825 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-05-03 01:07:32.539838 | orchestrator | Saturday 03 May 2025 01:04:35 +0000 (0:00:03.201) 0:00:06.724 ********** 2025-05-03 01:07:32.539853 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-05-03 01:07:32.539867 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-05-03 01:07:32.539881 | 
orchestrator | 2025-05-03 01:07:32.539894 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-05-03 01:07:32.539951 | orchestrator | Saturday 03 May 2025 01:04:41 +0000 (0:00:06.195) 0:00:12.920 ********** 2025-05-03 01:07:32.539967 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-03 01:07:32.539982 | orchestrator | 2025-05-03 01:07:32.539996 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-05-03 01:07:32.540009 | orchestrator | Saturday 03 May 2025 01:04:45 +0000 (0:00:03.527) 0:00:16.448 ********** 2025-05-03 01:07:32.540023 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-03 01:07:32.540037 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-05-03 01:07:32.540051 | orchestrator | 2025-05-03 01:07:32.540065 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-05-03 01:07:32.540079 | orchestrator | Saturday 03 May 2025 01:04:48 +0000 (0:00:03.719) 0:00:20.167 ********** 2025-05-03 01:07:32.540093 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-03 01:07:32.540182 | orchestrator | 2025-05-03 01:07:32.540240 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-05-03 01:07:32.540257 | orchestrator | Saturday 03 May 2025 01:04:52 +0000 (0:00:03.187) 0:00:23.355 ********** 2025-05-03 01:07:32.540273 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-05-03 01:07:32.540290 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-05-03 01:07:32.540305 | orchestrator | 2025-05-03 01:07:32.540321 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-05-03 01:07:32.540338 | orchestrator | Saturday 03 May 2025 01:05:01 +0000 (0:00:09.513) 0:00:32.869 ********** 2025-05-03 
01:07:32.540368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-03 01:07:32.540389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-03 01:07:32.540471 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 
'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-03 01:07:32.540923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.540944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.540991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-03 01:07:32.541085 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.541149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.541674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.541699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.541715 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.541786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.541817 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-03 01:07:32.541833 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-03 01:07:32.541848 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-03 01:07:32.541862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-03 01:07:32.541910 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-03 01:07:32.541946 | orchestrator | changed:
[testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-03 01:07:32.541962 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-03 01:07:32.541976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-03 01:07:32.541991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-03 01:07:32.542014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-03 01:07:32.542128 | orchestrator | changed: [testbed-node-4] => (item={'key':
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-03 01:07:32.542147 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-03 01:07:32.542162 | orchestrator |
2025-05-03 01:07:32.542176 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-05-03 01:07:32.542191 | orchestrator | Saturday 03 May 2025 01:05:04 +0000 (0:00:02.830) 0:00:35.700 **********
2025-05-03 01:07:32.542205 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:07:32.542220 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:07:32.542234 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:07:32.542249 | orchestrator | included:
/ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-03 01:07:32.542263 | orchestrator |
2025-05-03 01:07:32.542277 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2025-05-03 01:07:32.542291 | orchestrator | Saturday 03 May 2025 01:05:05 +0000 (0:00:00.969) 0:00:36.669 **********
2025-05-03 01:07:32.542306 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume)
2025-05-03 01:07:32.542323 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume)
2025-05-03 01:07:32.542339 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume)
2025-05-03 01:07:32.542356 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup)
2025-05-03 01:07:32.542372 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup)
2025-05-03 01:07:32.542390 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup)
2025-05-03 01:07:32.542435 | orchestrator |
2025-05-03 01:07:32.542451 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2025-05-03 01:07:32.542468 | orchestrator | Saturday 03 May 2025 01:05:08 +0000 (0:00:02.971) 0:00:39.641 **********
2025-05-03 01:07:32.542485 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'},
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-05-03 01:07:32.542510 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-05-03 01:07:32.542558 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-05-03 01:07:32.542575 | orchestrator | skipping: [testbed-node-4] =>
(item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-05-03 01:07:32.542608 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-05-03 01:07:32.542624 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-05-03 01:07:32.542646 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-05-03 01:07:32.542694 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '',
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-05-03 01:07:32.542711 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-05-03 01:07:32.542726 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port
cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-05-03 01:07:32.542742 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-05-03 01:07:32.542801 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-05-03 01:07:32.542819 | orchestrator |
2025-05-03 01:07:32.542834 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2025-05-03 01:07:32.542848 | orchestrator | Saturday 03 May 2025 01:05:11 +0000 (0:00:03.633) 0:00:43.274
**********
2025-05-03 01:07:32.542862 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-05-03 01:07:32.542877 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-05-03 01:07:32.542891 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-05-03 01:07:32.542905 | orchestrator |
2025-05-03 01:07:32.542919 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-05-03 01:07:32.542938 | orchestrator | Saturday 03 May 2025 01:05:14 +0000 (0:00:02.505) 0:00:45.780 **********
2025-05-03 01:07:32.542953 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-05-03 01:07:32.542967 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-05-03 01:07:32.542981 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-05-03 01:07:32.542995 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-05-03 01:07:32.543009 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-05-03 01:07:32.543024 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-05-03 01:07:32.543037 | orchestrator |
2025-05-03 01:07:32.543051 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-05-03 01:07:32.543065 | orchestrator | Saturday 03 May 2025 01:05:17 +0000 (0:00:03.498) 0:00:49.278 **********
2025-05-03 01:07:32.543079 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-05-03 01:07:32.543093 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-05-03 01:07:32.543107 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-05-03 01:07:32.543122 | orchestrator | ok: [testbed-node-5] =>
(item=cinder-volume)
2025-05-03 01:07:32.543135 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-05-03 01:07:32.543149 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-05-03 01:07:32.543163 | orchestrator |
2025-05-03 01:07:32.543177 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-05-03 01:07:32.543191 | orchestrator | Saturday 03 May 2025 01:05:19 +0000 (0:00:01.166) 0:00:50.445 **********
2025-05-03 01:07:32.543205 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:07:32.543219 | orchestrator |
2025-05-03 01:07:32.543233 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-05-03 01:07:32.543247 | orchestrator | Saturday 03 May 2025 01:05:19 +0000 (0:00:00.110) 0:00:50.555 **********
2025-05-03 01:07:32.543261 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:07:32.543281 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:07:32.543295 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:07:32.543309 | orchestrator | skipping: [testbed-node-3]
2025-05-03 01:07:32.543323 | orchestrator | skipping: [testbed-node-4]
2025-05-03 01:07:32.543337 | orchestrator | skipping: [testbed-node-5]
2025-05-03 01:07:32.543454 | orchestrator |
2025-05-03 01:07:32.543478 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-05-03 01:07:32.543492 | orchestrator | Saturday 03 May 2025 01:05:20 +0000 (0:00:00.837) 0:00:51.393 **********
2025-05-03 01:07:32.543508 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-03 01:07:32.543523 | orchestrator |
2025-05-03 01:07:32.543537 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-05-03 01:07:32.543552 | orchestrator | Saturday 03 May 2025 01:05:21 +0000
(0:00:01.603) 0:00:52.996 **********
2025-05-03 01:07:32.543566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-03 01:07:32.543621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-03 01:07:32.543651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api',
'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-03 01:07:32.543667 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-03 01:07:32.543691 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206',
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-03 01:07:32.543759 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-03 01:07:32.543819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-03 01:07:32.543837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-03 01:07:32.543853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-03 01:07:32.543877 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi',
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.543892 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.543907 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.543921 | orchestrator | 2025-05-03 01:07:32.543936 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-05-03 01:07:32.543950 | orchestrator | Saturday 03 May 2025 01:05:24 +0000 (0:00:03.085) 0:00:56.082 
********** 2025-05-03 01:07:32.544004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-03 01:07:32.544022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.544044 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:07:32.544059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-03 01:07:32.544073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.544088 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:07:32.544102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-03 01:07:32.544157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.544176 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:07:32.544190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.544213 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.544228 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:07:32.544242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.544257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.544271 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:07:32.544328 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.544346 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.544368 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:07:32.544383 | orchestrator | 2025-05-03 01:07:32.544425 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-05-03 01:07:32.544443 | orchestrator | Saturday 03 May 2025 01:05:27 +0000 (0:00:02.391) 0:00:58.474 ********** 2025-05-03 01:07:32.544460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-03 01:07:32.544477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 
'timeout': '30'}}})  2025-05-03 01:07:32.544493 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:07:32.544510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.544569 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.544587 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:07:32.544601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-03 01:07:32.544624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.544638 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:07:32.544653 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.544677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.544692 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:07:32.544737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-03 01:07:32.544765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.544780 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:07:32.544795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.544809 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.544824 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:07:32.544838 | orchestrator | 2025-05-03 01:07:32.544852 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-05-03 01:07:32.544867 | orchestrator | Saturday 03 May 2025 01:05:29 +0000 (0:00:02.646) 0:01:01.121 ********** 2025-05-03 01:07:32.544881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-03 01:07:32.544933 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.544959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-03 01:07:32.544974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 
'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-03 01:07:32.544989 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-03 01:07:32.545004 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.545056 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-03 01:07:32.545082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.545097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-03 01:07:32.545112 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.545137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.545182 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.545208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.545223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.545238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.545262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.545277 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.545331 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.545348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.545372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.545388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.545582 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.545646 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.545679 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.545695 | orchestrator | 2025-05-03 01:07:32.545710 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-05-03 01:07:32.545725 | orchestrator | Saturday 03 May 2025 01:05:33 +0000 (0:00:03.822) 0:01:04.943 ********** 2025-05-03 01:07:32.545739 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-03 01:07:32.545754 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:07:32.545769 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-03 01:07:32.545783 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:07:32.545803 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-03 01:07:32.545818 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:07:32.545831 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-03 01:07:32.545843 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-03 01:07:32.545856 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-03 01:07:32.545868 | orchestrator | 2025-05-03 01:07:32.545881 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-05-03 01:07:32.545893 | orchestrator | Saturday 03 May 2025 01:05:36 +0000 (0:00:02.969) 0:01:07.913 ********** 2025-05-03 01:07:32.545906 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-03 01:07:32.545919 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.545947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-03 01:07:32.545960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.545974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}}}})  2025-05-03 01:07:32.545991 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.546009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-03 01:07:32.546060 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.546084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-03 01:07:32.546098 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.546121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-03 01:07:32.546140 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.546154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.546173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.546186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 
'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.546207 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.546221 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-03 
01:07:32.546240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.546260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.546273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.546295 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.546308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.546328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.546341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.546354 | orchestrator | 2025-05-03 01:07:32.546372 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-05-03 01:07:32.546385 | orchestrator | Saturday 03 May 2025 01:05:52 +0000 (0:00:15.794) 0:01:23.708 ********** 2025-05-03 01:07:32.546433 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:07:32.546448 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:07:32.546461 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:07:32.546473 | orchestrator | changed: [testbed-node-3] 2025-05-03 01:07:32.546486 | orchestrator | changed: 
[testbed-node-4] 2025-05-03 01:07:32.546498 | orchestrator | changed: [testbed-node-5] 2025-05-03 01:07:32.546511 | orchestrator | 2025-05-03 01:07:32.546523 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-05-03 01:07:32.546536 | orchestrator | Saturday 03 May 2025 01:05:56 +0000 (0:00:03.842) 0:01:27.550 ********** 2025-05-03 01:07:32.546548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-03 01:07:32.546562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.546591 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.546605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-03 01:07:32.546626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.546640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.546653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.546684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.546698 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:07:32.546711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-03 01:07:32.546729 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.546742 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.546764 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.546784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-03 01:07:32.546797 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:07:32.546810 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:07:32.546822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.546835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.546854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.546868 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:07:32.546881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-03 01:07:32.546908 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.546922 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.546935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.546948 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:07:32.546968 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-03 01:07:32.546982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.547009 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.547023 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.547036 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:07:32.547048 | orchestrator | 2025-05-03 01:07:32.547061 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-05-03 01:07:32.547073 | orchestrator | Saturday 03 May 2025 01:05:57 +0000 (0:00:01.565) 0:01:29.117 ********** 2025-05-03 01:07:32.547086 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:07:32.547099 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:07:32.547111 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:07:32.547123 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:07:32.547136 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:07:32.547148 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:07:32.547161 | orchestrator | 2025-05-03 01:07:32.547173 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-05-03 01:07:32.547186 | orchestrator | Saturday 03 May 2025 01:05:58 +0000 (0:00:00.934) 0:01:30.051 ********** 2025-05-03 01:07:32.547204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-03 01:07:32.547226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.547248 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-03 01:07:32.547270 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.547283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-03 01:07:32.547296 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  
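The loop items above all share the same kolla-ansible service-definition shape: a container name, an image, volume mounts, and a Docker healthcheck whose `test` entry is a `CMD-SHELL` command (`healthcheck_curl` for the API, `healthcheck_port` for the backend services). A minimal sketch of how such dicts can be walked to recover the effective healthcheck command per container; the abridged `services` dict below copies values straight from the log items, and `healthcheck_commands` is an illustrative helper, not kolla-ansible code:

```python
# Abridged kolla-style service definitions, values taken from the log above.
services = {
    "cinder-api": {
        "container_name": "cinder_api",
        "enabled": True,
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8776"],
        },
    },
    "cinder-volume": {
        "container_name": "cinder_volume",
        "enabled": True,
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "test": ["CMD-SHELL", "healthcheck_port cinder-volume 5672"],
        },
    },
}

def healthcheck_commands(services):
    """Map container name -> shell command run by its Docker healthcheck."""
    return {
        svc["container_name"]: svc["healthcheck"]["test"][1]
        for svc in services.values()
        if svc.get("enabled") and "healthcheck" in svc
    }

print(healthcheck_commands(services))
```

Note that the scheduler, volume, and backup services all probe TCP port 5672 (the RabbitMQ connection) rather than an HTTP endpoint, which is why only cinder-api carries a `healthcheck_curl` test and an haproxy block.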
2025-05-03 01:07:32.547315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-03 01:07:32.547335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-03 01:07:32.547356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-03 01:07:32.547370 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.547388 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 
'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.547438 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.547460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.547473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.547487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.547500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.547527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.547547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.547560 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.547573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.547586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-03 01:07:32.547614 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.547635 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-03 01:07:32.547649 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-03 01:07:32.547662 | orchestrator |
2025-05-03 01:07:32.547675 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-05-03 01:07:32.547687 | orchestrator | Saturday 03 May 2025 01:06:02 +0000 (0:00:03.385) 0:01:33.437 **********
2025-05-03 01:07:32.547700 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:07:32.547712 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:07:32.547725 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:07:32.547737 | orchestrator | skipping: [testbed-node-3]
2025-05-03 01:07:32.547750 | orchestrator | skipping: [testbed-node-4]
2025-05-03 01:07:32.547762 | orchestrator | skipping: [testbed-node-5]
2025-05-03 01:07:32.547775 | orchestrator |
2025-05-03 01:07:32.547787 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2025-05-03 01:07:32.547800 | orchestrator | Saturday 03 May 2025 01:06:02 +0000 (0:00:00.722) 0:01:34.160 **********
2025-05-03 01:07:32.547812 | orchestrator | changed: [testbed-node-0]
2025-05-03 01:07:32.547824 | orchestrator |
2025-05-03 01:07:32.547837 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2025-05-03 01:07:32.547849 | orchestrator | Saturday 03 May 2025 01:06:05 +0000 (0:00:02.406) 0:01:36.566 **********
2025-05-03 01:07:32.547862 | orchestrator | changed: [testbed-node-0]
2025-05-03 01:07:32.547874 | orchestrator |
2025-05-03 01:07:32.547886 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2025-05-03 01:07:32.547899 | orchestrator | Saturday 03 May 2025 01:06:07 +0000 (0:00:02.233) 0:01:38.799 **********
2025-05-03 01:07:32.547911 | orchestrator | changed: [testbed-node-0]
2025-05-03 01:07:32.547923 | orchestrator |
2025-05-03 01:07:32.547936 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-05-03 01:07:32.547948 | orchestrator | Saturday 03 May 2025 01:06:25 +0000 (0:00:17.917) 0:01:56.717 **********
2025-05-03 01:07:32.547960 | orchestrator |
2025-05-03 01:07:32.547973 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-05-03 01:07:32.547985 | orchestrator | Saturday 03 May 2025 01:06:25 +0000 (0:00:00.049) 0:01:56.767 **********
2025-05-03 01:07:32.547997 | orchestrator |
2025-05-03 01:07:32.548010 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-05-03 01:07:32.548022 | orchestrator | Saturday 03 May 2025 01:06:25 +0000 (0:00:00.142) 0:01:56.910 **********
2025-05-03 01:07:32.548034 | orchestrator |
2025-05-03 01:07:32.548052 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-05-03 01:07:32.548064 | orchestrator | Saturday 03 May 2025 01:06:25 +0000 (0:00:00.047) 0:01:56.958 **********
2025-05-03 01:07:32.548083 | orchestrator |
2025-05-03 01:07:32.548095 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-05-03 01:07:32.548108 | orchestrator | Saturday 03 May 2025 01:06:25 +0000 (0:00:00.047) 0:01:57.005 **********
2025-05-03 01:07:32.548120 | orchestrator |
2025-05-03 01:07:32.548132 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-05-03 01:07:32.548145 | orchestrator | Saturday 03 May 2025 01:06:25 +0000 (0:00:00.047) 0:01:57.053 **********
2025-05-03 01:07:32.548157 | orchestrator |
2025-05-03 01:07:32.548169 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2025-05-03 01:07:32.548181 | orchestrator | Saturday 03 May 2025 01:06:25 +0000 (0:00:00.155) 0:01:57.209 **********
2025-05-03 01:07:32.548194 | orchestrator | changed: [testbed-node-0]
2025-05-03 01:07:32.548206 | orchestrator | changed: [testbed-node-2]
2025-05-03 01:07:32.548218 | orchestrator | changed: [testbed-node-1]
2025-05-03 01:07:32.548231 | orchestrator |
2025-05-03 01:07:32.548243 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2025-05-03 01:07:32.548255 | orchestrator | Saturday 03 May 2025 01:06:49 +0000 (0:00:23.502) 0:02:20.712 **********
2025-05-03 01:07:32.548267 | orchestrator | changed: [testbed-node-0]
2025-05-03 01:07:32.548280 | orchestrator | changed: [testbed-node-1]
2025-05-03 01:07:32.548292 | orchestrator | changed: [testbed-node-2]
2025-05-03 01:07:32.548305 | orchestrator |
2025-05-03 01:07:32.548317 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2025-05-03 01:07:32.548335 | orchestrator | Saturday 03 May 2025 01:06:54 +0000 (0:00:05.434) 0:02:26.146 **********
2025-05-03 01:07:35.604587 | orchestrator | changed: [testbed-node-5]
2025-05-03 01:07:35.604713 | orchestrator | changed: [testbed-node-4]
2025-05-03 01:07:35.604733 | orchestrator | changed: [testbed-node-3]
2025-05-03 01:07:35.604749 | orchestrator |
2025-05-03 01:07:35.604765 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2025-05-03 01:07:35.604781 | orchestrator | Saturday 03 May 2025 01:07:18 +0000 (0:00:23.981) 0:02:50.128 **********
2025-05-03 01:07:35.604795 | orchestrator | changed: [testbed-node-5]
2025-05-03 01:07:35.604809 | orchestrator | changed: [testbed-node-4]
2025-05-03 01:07:35.604823 | orchestrator | changed: [testbed-node-3]
2025-05-03 01:07:35.604837 | orchestrator |
2025-05-03 01:07:35.604852 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2025-05-03 01:07:35.604867 | orchestrator | Saturday 03 May 2025 01:07:30 +0000 (0:00:11.492) 0:03:01.620 **********
2025-05-03 01:07:35.604881 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:07:35.604895 | orchestrator |
2025-05-03 01:07:35.604909 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 01:07:35.604924 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-05-03 01:07:35.604940 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-05-03 01:07:35.604955 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-05-03 01:07:35.604969 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-03 01:07:35.604983 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-03 01:07:35.604998 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-03 01:07:35.605012 | orchestrator |
2025-05-03 01:07:35.605026 | orchestrator |
2025-05-03 01:07:35.605040 | orchestrator | TASKS RECAP ********************************************************************
2025-05-03 01:07:35.605083 | orchestrator | Saturday 03 May 2025 01:07:30 +0000 (0:00:00.535) 0:03:02.156 **********
2025-05-03 01:07:35.605099 | orchestrator | ===============================================================================
2025-05-03 01:07:35.605115 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 23.98s
2025-05-03 01:07:35.605131 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 23.50s
2025-05-03 01:07:35.605147 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.92s
2025-05-03 01:07:35.605163 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 15.79s
2025-05-03 01:07:35.605179 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.49s
2025-05-03 01:07:35.605195 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 9.51s
2025-05-03 01:07:35.605211 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.20s
2025-05-03 01:07:35.605227 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.43s
2025-05-03 01:07:35.605243 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 3.84s
2025-05-03 01:07:35.605260 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.82s
2025-05-03 01:07:35.605276 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.72s
2025-05-03 01:07:35.605306 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.63s
2025-05-03 01:07:35.605323 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.53s
2025-05-03 01:07:35.605338 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.50s
2025-05-03 01:07:35.605355 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.39s
2025-05-03 01:07:35.605371 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.20s
2025-05-03 01:07:35.605388 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.19s
2025-05-03 01:07:35.605427 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.09s
2025-05-03 01:07:35.605444 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.97s
2025-05-03 01:07:35.605460 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.97s
2025-05-03 01:07:35.605475 | orchestrator | 2025-05-03 01:07:32 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:07:35.605490 | orchestrator | 2025-05-03 01:07:32 | INFO  | Task 383f9c04-7522-4007-9ec5-a8b8dcad319a is in state STARTED
2025-05-03 01:07:35.605504 | orchestrator | 2025-05-03 01:07:32 | INFO  | Task 06a874a6-92e6-4a0a-af22-e9d522aa1078 is in state STARTED
2025-05-03 01:07:35.605519 | orchestrator | 2025-05-03 01:07:32 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:07:35.605550 | orchestrator | 2025-05-03 01:07:35 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED
2025-05-03 01:07:35.607555 | orchestrator | 2025-05-03 01:07:35 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED
2025-05-03 01:07:35.610144 | orchestrator | 2025-05-03 01:07:35 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED
2025-05-03 01:07:35.612080 | orchestrator | 2025-05-03 01:07:35 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:07:35.613449 | orchestrator | 2025-05-03 01:07:35 | INFO  | Task 383f9c04-7522-4007-9ec5-a8b8dcad319a is in state STARTED
2025-05-03 01:07:35.615191 | orchestrator | 2025-05-03 01:07:35 | INFO  | Task 06a874a6-92e6-4a0a-af22-e9d522aa1078 is in state SUCCESS
2025-05-03 01:07:35.617177 | orchestrator |
2025-05-03 01:07:35.617223 | orchestrator |
2025-05-03 01:07:35.617239 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-03 01:07:35.617584 | orchestrator |
2025-05-03 01:07:35.617621 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-03 01:07:35.617676 | orchestrator | Saturday 03 May 2025 01:04:18 +0000 (0:00:00.270) 0:00:00.270 **********
2025-05-03 01:07:35.617704 | orchestrator | ok: [testbed-node-0]
2025-05-03 01:07:35.617729 | orchestrator | ok: [testbed-node-1]
2025-05-03 01:07:35.617747 | orchestrator | ok: [testbed-node-2]
2025-05-03 01:07:35.617764 | orchestrator |
2025-05-03 01:07:35.617781 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-03 01:07:35.617797 | orchestrator | Saturday 03 May 2025 01:04:18 +0000 (0:00:00.255) 0:00:00.526 **********
2025-05-03 01:07:35.617814 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-05-03 01:07:35.617830 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-05-03 01:07:35.617846 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-05-03 01:07:35.617862 | orchestrator |
2025-05-03 01:07:35.617880 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-05-03 01:07:35.617895 | orchestrator |
2025-05-03 01:07:35.617912 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-05-03 01:07:35.617928 | orchestrator | Saturday 03 May 2025 01:04:18 +0000 (0:00:00.215) 0:00:00.741 **********
2025-05-03 01:07:35.617945 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 01:07:35.617961 | orchestrator |
2025-05-03 01:07:35.617977 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-05-03 01:07:35.617993 | orchestrator | Saturday 03 May 2025 01:04:19 +0000 (0:00:00.579) 0:00:01.321 **********
2025-05-03 01:07:35.618009 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-05-03 01:07:35.618079 | orchestrator |
2025-05-03 01:07:35.618096 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-05-03 01:07:35.618110 | orchestrator | Saturday 03 May 2025 01:04:22 +0000 (0:00:03.328) 0:00:04.650 **********
2025-05-03 01:07:35.618124 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-05-03 01:07:35.618138 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-05-03 01:07:35.618153 | orchestrator |
2025-05-03 01:07:35.618167 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-05-03 01:07:35.618181 | orchestrator | Saturday 03 May 2025 01:04:29 +0000 (0:00:06.500) 0:00:11.151 **********
2025-05-03 01:07:35.618195 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-03 01:07:35.618210 | orchestrator |
2025-05-03 01:07:35.618224 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-05-03 01:07:35.618237 | orchestrator | Saturday 03 May 2025 01:04:32 +0000 (0:00:03.231) 0:00:14.383 **********
2025-05-03 01:07:35.618251 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-03 01:07:35.618274 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2025-05-03 01:07:35.618300 | orchestrator |
2025-05-03 01:07:35.618323 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2025-05-03 01:07:35.618348 | orchestrator | Saturday 03 May 2025 01:04:36 +0000 (0:00:03.852) 0:00:18.235 **********
2025-05-03 01:07:35.618373 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-03 01:07:35.618429 | orchestrator |
2025-05-03 01:07:35.618446 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2025-05-03
01:07:35.618460 | orchestrator | Saturday 03 May 2025 01:04:39 +0000 (0:00:03.089) 0:00:21.324 ********** 2025-05-03 01:07:35.618474 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-05-03 01:07:35.618488 | orchestrator | 2025-05-03 01:07:35.618502 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-05-03 01:07:35.618516 | orchestrator | Saturday 03 May 2025 01:04:43 +0000 (0:00:03.979) 0:00:25.303 ********** 2025-05-03 01:07:35.618551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-03 01:07:35.618583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5', '']}}}}) 2025-05-03 01:07:35.618600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-03 01:07:35.618640 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-03 01:07:35.618658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 
'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-03 01:07:35.618704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-03 01:07:35.618731 | orchestrator | 2025-05-03 01:07:35.618756 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-03 01:07:35.618780 | orchestrator | Saturday 03 May 2025 01:04:47 +0000 (0:00:04.024) 0:00:29.328 ********** 2025-05-03 01:07:35.618805 | orchestrator | included: 
/ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 01:07:35.618831 | orchestrator |
2025-05-03 01:07:35.618855 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2025-05-03 01:07:35.618880 | orchestrator | Saturday 03 May 2025 01:04:48 +0000 (0:00:00.508) 0:00:29.837 **********
2025-05-03 01:07:35.618905 | orchestrator | changed: [testbed-node-0]
2025-05-03 01:07:35.618929 | orchestrator | changed: [testbed-node-2]
2025-05-03 01:07:35.618954 | orchestrator | changed: [testbed-node-1]
2025-05-03 01:07:35.618976 | orchestrator |
2025-05-03 01:07:35.618991 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2025-05-03 01:07:35.619005 | orchestrator | Saturday 03 May 2025 01:04:59 +0000 (0:00:11.810) 0:00:41.648 **********
2025-05-03 01:07:35.619026 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-03 01:07:35.619041 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-03 01:07:35.619063 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-03 01:07:35.619078 | orchestrator |
2025-05-03 01:07:35.619092 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2025-05-03 01:07:35.619106 | orchestrator | Saturday 03 May 2025 01:05:02 +0000 (0:00:02.931) 0:00:44.580 **********
2025-05-03 01:07:35.619119 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-03 01:07:35.619133 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-03 01:07:35.619148 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-03 01:07:35.619162 | orchestrator |
2025-05-03 01:07:35.619175 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2025-05-03 01:07:35.619189 | orchestrator | Saturday 03 May 2025 01:05:04 +0000 (0:00:01.515) 0:00:46.095 **********
2025-05-03 01:07:35.619203 | orchestrator | ok: [testbed-node-0]
2025-05-03 01:07:35.619222 | orchestrator | ok: [testbed-node-1]
2025-05-03 01:07:35.619237 | orchestrator | ok: [testbed-node-2]
2025-05-03 01:07:35.619251 | orchestrator |
2025-05-03 01:07:35.619265 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2025-05-03 01:07:35.619278 | orchestrator | Saturday 03 May 2025 01:05:04 +0000 (0:00:00.192) 0:00:46.690 **********
2025-05-03 01:07:35.619292 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:07:35.619311 | orchestrator |
2025-05-03 01:07:35.619326 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2025-05-03 01:07:35.619339 | orchestrator | Saturday 03 May 2025 01:05:05 +0000 (0:00:00.192) 0:00:46.883 **********
2025-05-03 01:07:35.619353 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:07:35.619367 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:07:35.619381 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:07:35.619417 | orchestrator |
2025-05-03 01:07:35.619433 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-05-03 01:07:35.619448 | orchestrator | Saturday 03 May 2025 01:05:05 +0000 (0:00:00.323) 0:00:47.206 **********
2025-05-03 01:07:35.619461 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 01:07:35.619475 | orchestrator |
2025-05-03 01:07:35.619489 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2025-05-03 01:07:35.619504 | orchestrator | Saturday 03 May 2025 01:05:06 +0000 (0:00:00.792) 0:00:47.998 ********** 2025-05-03 01:07:35.619530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-03 01:07:35.619555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-03 01:07:35.619580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-03 01:07:35.619596 | orchestrator | 2025-05-03 01:07:35.619610 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-05-03 01:07:35.619624 | orchestrator | Saturday 03 May 2025 01:05:10 +0000 (0:00:04.283) 0:00:52.282 ********** 2025-05-03 01:07:35.619645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-03 01:07:35.619660 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:07:35.619683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-03 01:07:35.619699 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:07:35.619714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-03 01:07:35.619735 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:07:35.619749 | orchestrator | 2025-05-03 01:07:35.619764 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-05-03 01:07:35.619778 | orchestrator | Saturday 03 May 2025 01:05:15 +0000 (0:00:04.652) 0:00:56.934 ********** 2025-05-03 01:07:35.619799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-03 01:07:35.619815 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:07:35.619830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-03 01:07:35.619851 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:07:35.619866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-03 01:07:35.619881 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:07:35.619895 | orchestrator | 2025-05-03 01:07:35.619909 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-05-03 01:07:35.619927 | orchestrator | Saturday 03 May 2025 01:05:19 +0000 (0:00:04.171) 0:01:01.106 ********** 2025-05-03 01:07:35.619950 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:07:35.619982 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:07:35.620013 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:07:35.620034 | orchestrator | 2025-05-03 01:07:35.620066 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-05-03 01:07:35.620088 | orchestrator | Saturday 03 May 2025 01:05:23 +0000 (0:00:03.855) 0:01:04.961 ********** 2025-05-03 01:07:35.620110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 
'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-03 01:07:35.620146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-03 01:07:35.620182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-03 01:07:35.620224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 
'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-03 01:07:35.620261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-03 01:07:35.620297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-03 01:07:35.620322 | orchestrator | 2025-05-03 01:07:35.620346 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-05-03 01:07:35.620370 | orchestrator | Saturday 03 May 2025 01:05:29 +0000 (0:00:06.843) 0:01:11.806 ********** 2025-05-03 01:07:35.620471 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:07:35.620500 | orchestrator | changed: [testbed-node-1] 2025-05-03 01:07:35.620525 | orchestrator | changed: [testbed-node-2] 2025-05-03 01:07:35.620548 | orchestrator | 2025-05-03 01:07:35.620572 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-05-03 01:07:35.620595 | orchestrator | Saturday 03 May 2025 01:05:42 +0000 (0:00:12.339) 0:01:24.145 ********** 2025-05-03 01:07:35.620617 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:07:35.620651 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:07:35.620674 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:07:35.620697 | 
orchestrator | 2025-05-03 01:07:35.620720 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-05-03 01:07:35.620743 | orchestrator | Saturday 03 May 2025 01:05:57 +0000 (0:00:15.196) 0:01:39.342 ********** 2025-05-03 01:07:35.620767 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:07:35.620790 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:07:35.620813 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:07:35.620835 | orchestrator | 2025-05-03 01:07:35.620858 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-05-03 01:07:35.620889 | orchestrator | Saturday 03 May 2025 01:06:04 +0000 (0:00:06.708) 0:01:46.050 ********** 2025-05-03 01:07:35.620913 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:07:35.620936 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:07:35.620968 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:07:35.620989 | orchestrator | 2025-05-03 01:07:35.621011 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-05-03 01:07:35.621032 | orchestrator | Saturday 03 May 2025 01:06:10 +0000 (0:00:05.987) 0:01:52.037 ********** 2025-05-03 01:07:35.621052 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:07:35.621083 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:07:35.621105 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:07:35.621126 | orchestrator | 2025-05-03 01:07:35.621147 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-05-03 01:07:35.621168 | orchestrator | Saturday 03 May 2025 01:06:17 +0000 (0:00:07.219) 0:01:59.257 ********** 2025-05-03 01:07:35.621189 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:07:35.621209 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:07:35.621229 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:07:35.621249 | 
orchestrator | 2025-05-03 01:07:35.621269 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-05-03 01:07:35.621288 | orchestrator | Saturday 03 May 2025 01:06:17 +0000 (0:00:00.394) 0:01:59.651 ********** 2025-05-03 01:07:35.621306 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-03 01:07:35.621329 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:07:35.621348 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-03 01:07:35.621368 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:07:35.621388 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-03 01:07:35.621435 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:07:35.621458 | orchestrator | 2025-05-03 01:07:35.621479 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-05-03 01:07:35.621501 | orchestrator | Saturday 03 May 2025 01:06:20 +0000 (0:00:02.943) 0:02:02.594 ********** 2025-05-03 01:07:35.621524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': 
'30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-03 01:07:35.621562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-03 01:07:35.621598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-03 01:07:35.621629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-03 01:07:35.621662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-03 01:07:35.621685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-03 01:07:35.621716 | orchestrator | 2025-05-03 01:07:35.621737 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-03 01:07:35.621758 | orchestrator | Saturday 03 May 2025 01:06:24 +0000 (0:00:03.533) 0:02:06.128 ********** 2025-05-03 01:07:35.621779 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:07:35.621800 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:07:35.621822 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:07:35.621843 | orchestrator | 2025-05-03 01:07:35.621873 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-05-03 01:07:35.621895 | orchestrator | Saturday 03 May 2025 01:06:24 +0000 (0:00:00.413) 0:02:06.541 ********** 2025-05-03 01:07:35.621917 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:07:35.621938 | orchestrator | 2025-05-03 01:07:35.621960 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-05-03 01:07:35.621981 | orchestrator | Saturday 03 May 2025 01:06:26 +0000 (0:00:02.213) 0:02:08.755 ********** 2025-05-03 01:07:35.622003 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:07:35.622062 | orchestrator | 2025-05-03 01:07:35.622086 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-05-03 01:07:35.622108 | orchestrator | Saturday 03 May 2025 01:06:29 +0000 (0:00:02.226) 0:02:10.982 ********** 2025-05-03 01:07:35.622129 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:07:35.622150 | orchestrator | 2025-05-03 01:07:35.622172 | orchestrator | TASK [glance : Running Glance bootstrap 
container] ***************************** 2025-05-03 01:07:35.622194 | orchestrator | Saturday 03 May 2025 01:06:31 +0000 (0:00:02.115) 0:02:13.098 ********** 2025-05-03 01:07:35.622215 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:07:35.622236 | orchestrator | 2025-05-03 01:07:35.622258 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-05-03 01:07:35.622279 | orchestrator | Saturday 03 May 2025 01:06:57 +0000 (0:00:25.931) 0:02:39.029 ********** 2025-05-03 01:07:35.622298 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:07:35.622318 | orchestrator | 2025-05-03 01:07:35.622339 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-03 01:07:35.622361 | orchestrator | Saturday 03 May 2025 01:06:59 +0000 (0:00:01.905) 0:02:40.934 ********** 2025-05-03 01:07:35.622381 | orchestrator | 2025-05-03 01:07:35.622425 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-03 01:07:35.622448 | orchestrator | Saturday 03 May 2025 01:06:59 +0000 (0:00:00.069) 0:02:41.003 ********** 2025-05-03 01:07:35.622469 | orchestrator | 2025-05-03 01:07:35.622491 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-03 01:07:35.622512 | orchestrator | Saturday 03 May 2025 01:06:59 +0000 (0:00:00.059) 0:02:41.063 ********** 2025-05-03 01:07:35.622532 | orchestrator | 2025-05-03 01:07:35.622552 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-05-03 01:07:35.622571 | orchestrator | Saturday 03 May 2025 01:06:59 +0000 (0:00:00.711) 0:02:41.774 ********** 2025-05-03 01:07:35.622593 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:07:35.622613 | orchestrator | changed: [testbed-node-2] 2025-05-03 01:07:35.622634 | orchestrator | changed: [testbed-node-1] 2025-05-03 01:07:35.622666 | orchestrator | 2025-05-03 
01:07:35.622686 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-03 01:07:35.622709 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-05-03 01:07:35.622730 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-03 01:07:35.622750 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-03 01:07:35.622769 | orchestrator | 2025-05-03 01:07:35.622790 | orchestrator | 2025-05-03 01:07:35.622812 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-03 01:07:35.622832 | orchestrator | Saturday 03 May 2025 01:07:32 +0000 (0:00:32.670) 0:03:14.444 ********** 2025-05-03 01:07:35.622863 | orchestrator | =============================================================================== 2025-05-03 01:07:35.622885 | orchestrator | glance : Restart glance-api container ---------------------------------- 32.67s 2025-05-03 01:07:35.622906 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 25.93s 2025-05-03 01:07:35.622928 | orchestrator | glance : Copying over glance-cache.conf for glance_api ----------------- 15.20s 2025-05-03 01:07:35.622949 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 12.34s 2025-05-03 01:07:35.622971 | orchestrator | glance : Ensuring glance service ceph config subdir exists ------------- 11.81s 2025-05-03 01:07:35.622993 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 7.22s 2025-05-03 01:07:35.623015 | orchestrator | glance : Copying over config.json files for services -------------------- 6.84s 2025-05-03 01:07:35.623037 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 6.71s 2025-05-03 01:07:35.623059 | 
orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.50s 2025-05-03 01:07:35.623080 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 5.99s 2025-05-03 01:07:35.623102 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.65s 2025-05-03 01:07:35.623123 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.28s 2025-05-03 01:07:35.623146 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.17s 2025-05-03 01:07:35.623166 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.02s 2025-05-03 01:07:35.623188 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.98s 2025-05-03 01:07:35.623210 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.86s 2025-05-03 01:07:35.623232 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.85s 2025-05-03 01:07:35.623253 | orchestrator | glance : Check glance containers ---------------------------------------- 3.53s 2025-05-03 01:07:35.623275 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.33s 2025-05-03 01:07:35.623310 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.23s 2025-05-03 01:07:38.655773 | orchestrator | 2025-05-03 01:07:35 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:07:38.655895 | orchestrator | 2025-05-03 01:07:38 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED 2025-05-03 01:07:38.657184 | orchestrator | 2025-05-03 01:07:38 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED 2025-05-03 01:07:38.657220 | orchestrator | 2025-05-03 01:07:38 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 
01:07:38.657792 | orchestrator | 2025-05-03 01:07:38 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:07:38.659010 | orchestrator | 2025-05-03 01:07:38 | INFO  | Task 383f9c04-7522-4007-9ec5-a8b8dcad319a is in state STARTED 2025-05-03 01:07:41.707860 | orchestrator | 2025-05-03 01:07:38 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:07:41.708018 | orchestrator | 2025-05-03 01:07:41 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED 2025-05-03 01:07:41.709823 | orchestrator | 2025-05-03 01:07:41 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED 2025-05-03 01:07:41.711455 | orchestrator | 2025-05-03 01:07:41 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:07:41.713709 | orchestrator | 2025-05-03 01:07:41 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:07:41.715856 | orchestrator | 2025-05-03 01:07:41 | INFO  | Task 383f9c04-7522-4007-9ec5-a8b8dcad319a is in state STARTED 2025-05-03 01:07:41.716223 | orchestrator | 2025-05-03 01:07:41 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:07:44.763003 | orchestrator | 2025-05-03 01:07:44 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED 2025-05-03 01:07:44.763607 | orchestrator | 2025-05-03 01:07:44 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED 2025-05-03 01:07:44.764599 | orchestrator | 2025-05-03 01:07:44 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:07:44.765535 | orchestrator | 2025-05-03 01:07:44 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:07:44.768451 | orchestrator | 2025-05-03 01:07:44 | INFO  | Task 383f9c04-7522-4007-9ec5-a8b8dcad319a is in state STARTED 2025-05-03 01:07:47.820155 | orchestrator | 2025-05-03 01:07:44 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:07:47.820305 | orchestrator 
| 2025-05-03 01:07:47 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED 2025-05-03 01:07:47.821313 | orchestrator | 2025-05-03 01:07:47 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED 2025-05-03 01:07:47.821749 | orchestrator | 2025-05-03 01:07:47 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:07:47.821801 | orchestrator | 2025-05-03 01:07:47 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:07:47.823506 | orchestrator | 2025-05-03 01:07:47 | INFO  | Task 383f9c04-7522-4007-9ec5-a8b8dcad319a is in state STARTED 2025-05-03 01:07:50.879023 | orchestrator | 2025-05-03 01:07:47 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:07:50.879141 | orchestrator | 2025-05-03 01:07:50 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED 2025-05-03 01:07:50.880789 | orchestrator | 2025-05-03 01:07:50 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED 2025-05-03 01:07:50.882252 | orchestrator | 2025-05-03 01:07:50 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:07:50.883959 | orchestrator | 2025-05-03 01:07:50 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:07:50.885214 | orchestrator | 2025-05-03 01:07:50 | INFO  | Task 383f9c04-7522-4007-9ec5-a8b8dcad319a is in state STARTED 2025-05-03 01:07:53.929689 | orchestrator | 2025-05-03 01:07:50 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:07:53.929837 | orchestrator | 2025-05-03 01:07:53 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED 2025-05-03 01:07:53.930762 | orchestrator | 2025-05-03 01:07:53 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED 2025-05-03 01:07:53.933129 | orchestrator | 2025-05-03 01:07:53 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:07:53.934590 | orchestrator | 
2025-05-03 01:07:53 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:07:53.937456 | orchestrator | 2025-05-03 01:07:53 | INFO  | Task 383f9c04-7522-4007-9ec5-a8b8dcad319a is in state STARTED 2025-05-03 01:07:53.937859 | orchestrator | 2025-05-03 01:07:53 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:07:56.990354 | orchestrator | 2025-05-03 01:07:56 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED 2025-05-03 01:07:56.994247 | orchestrator | 2025-05-03 01:07:56 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED 2025-05-03 01:07:56.995926 | orchestrator | 2025-05-03 01:07:56 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:07:56.997917 | orchestrator | 2025-05-03 01:07:56 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:07:56.999574 | orchestrator | 2025-05-03 01:07:56 | INFO  | Task 383f9c04-7522-4007-9ec5-a8b8dcad319a is in state STARTED 2025-05-03 01:08:00.045068 | orchestrator | 2025-05-03 01:07:56 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:08:00.045208 | orchestrator | 2025-05-03 01:08:00 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED 2025-05-03 01:08:00.046612 | orchestrator | 2025-05-03 01:08:00 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED 2025-05-03 01:08:00.047543 | orchestrator | 2025-05-03 01:08:00 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:08:00.048245 | orchestrator | 2025-05-03 01:08:00 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:08:00.050171 | orchestrator | 2025-05-03 01:08:00 | INFO  | Task 383f9c04-7522-4007-9ec5-a8b8dcad319a is in state STARTED 2025-05-03 01:08:03.092084 | orchestrator | 2025-05-03 01:08:00 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:08:03.092222 | orchestrator | 2025-05-03 01:08:03 | INFO  | 
Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED 2025-05-03 01:08:03.093409 | orchestrator | 2025-05-03 01:08:03 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED 2025-05-03 01:08:03.095543 | orchestrator | 2025-05-03 01:08:03 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:08:03.096449 | orchestrator | 2025-05-03 01:08:03 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:08:03.097863 | orchestrator | 2025-05-03 01:08:03 | INFO  | Task 383f9c04-7522-4007-9ec5-a8b8dcad319a is in state STARTED 2025-05-03 01:08:06.148110 | orchestrator | 2025-05-03 01:08:03 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:08:06.148251 | orchestrator | 2025-05-03 01:08:06 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED 2025-05-03 01:08:06.149647 | orchestrator | 2025-05-03 01:08:06 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED 2025-05-03 01:08:06.151002 | orchestrator | 2025-05-03 01:08:06 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:08:06.153546 | orchestrator | 2025-05-03 01:08:06 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:08:06.155605 | orchestrator | 2025-05-03 01:08:06 | INFO  | Task 383f9c04-7522-4007-9ec5-a8b8dcad319a is in state STARTED 2025-05-03 01:08:06.156024 | orchestrator | 2025-05-03 01:08:06 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:08:09.208565 | orchestrator | 2025-05-03 01:08:09 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED 2025-05-03 01:08:09.211387 | orchestrator | 2025-05-03 01:08:09 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED 2025-05-03 01:08:09.212545 | orchestrator | 2025-05-03 01:08:09 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:08:09.214392 | orchestrator | 2025-05-03 01:08:09 | INFO  | Task 
48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:08:09.215924 | orchestrator | 2025-05-03 01:08:09 | INFO  | Task 383f9c04-7522-4007-9ec5-a8b8dcad319a is in state STARTED 2025-05-03 01:08:12.272958 | orchestrator | 2025-05-03 01:08:09 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:08:12.273134 | orchestrator | 2025-05-03 01:08:12 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED 2025-05-03 01:08:12.274962 | orchestrator | 2025-05-03 01:08:12 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED 2025-05-03 01:08:12.277645 | orchestrator | 2025-05-03 01:08:12 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:08:12.278224 | orchestrator | 2025-05-03 01:08:12 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:08:12.279922 | orchestrator | 2025-05-03 01:08:12 | INFO  | Task 383f9c04-7522-4007-9ec5-a8b8dcad319a is in state STARTED 2025-05-03 01:08:15.331668 | orchestrator | 2025-05-03 01:08:12 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:08:15.331840 | orchestrator | 2025-05-03 01:08:15 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED 2025-05-03 01:08:15.332016 | orchestrator | 2025-05-03 01:08:15 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED 2025-05-03 01:08:15.333383 | orchestrator | 2025-05-03 01:08:15 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:08:15.334681 | orchestrator | 2025-05-03 01:08:15 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:08:15.335156 | orchestrator | 2025-05-03 01:08:15 | INFO  | Task 383f9c04-7522-4007-9ec5-a8b8dcad319a is in state STARTED 2025-05-03 01:08:15.335868 | orchestrator | 2025-05-03 01:08:15 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:08:18.383070 | orchestrator | 2025-05-03 01:08:18 | INFO  | Task 
d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED 2025-05-03 01:08:18.384177 | orchestrator | 2025-05-03 01:08:18 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED 2025-05-03 01:08:18.385478 | orchestrator | 2025-05-03 01:08:18 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:08:18.386768 | orchestrator | 2025-05-03 01:08:18 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:08:18.387860 | orchestrator | 2025-05-03 01:08:18 | INFO  | Task 383f9c04-7522-4007-9ec5-a8b8dcad319a is in state STARTED 2025-05-03 01:08:21.435810 | orchestrator | 2025-05-03 01:08:18 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:08:21.435986 | orchestrator | 2025-05-03 01:08:21 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED 2025-05-03 01:08:21.436558 | orchestrator | 2025-05-03 01:08:21 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED 2025-05-03 01:08:21.436591 | orchestrator | 2025-05-03 01:08:21 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:08:21.436616 | orchestrator | 2025-05-03 01:08:21 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:08:21.437478 | orchestrator | 2025-05-03 01:08:21 | INFO  | Task 383f9c04-7522-4007-9ec5-a8b8dcad319a is in state STARTED 2025-05-03 01:08:24.492809 | orchestrator | 2025-05-03 01:08:21 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:08:24.493007 | orchestrator | 2025-05-03 01:08:24 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED 2025-05-03 01:08:24.493676 | orchestrator | 2025-05-03 01:08:24 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED 2025-05-03 01:08:24.495576 | orchestrator | 2025-05-03 01:08:24 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:08:24.497287 | orchestrator | 2025-05-03 01:08:24 | INFO  | Task 
48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:08:24.499082 | orchestrator | 2025-05-03 01:08:24 | INFO  | Task 383f9c04-7522-4007-9ec5-a8b8dcad319a is in state STARTED
2025-05-03 01:08:27.555598 | orchestrator | 2025-05-03 01:08:24 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:08:27.555783 | orchestrator | 2025-05-03 01:08:27 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED
2025-05-03 01:08:27.556798 | orchestrator | 2025-05-03 01:08:27 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED
2025-05-03 01:08:27.560619 | orchestrator | 2025-05-03 01:08:27 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED
2025-05-03 01:08:27.562916 | orchestrator | 2025-05-03 01:08:27 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:08:27.564592 | orchestrator | 2025-05-03 01:08:27 | INFO  | Task 383f9c04-7522-4007-9ec5-a8b8dcad319a is in state STARTED
2025-05-03 01:08:27.564892 | orchestrator | 2025-05-03 01:08:27 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:08:30.611045 | orchestrator | 2025-05-03 01:08:30 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED
2025-05-03 01:08:30.612804 | orchestrator | 2025-05-03 01:08:30 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED
2025-05-03 01:08:30.615669 | orchestrator | 2025-05-03 01:08:30 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED
2025-05-03 01:08:30.621526 | orchestrator | 2025-05-03 01:08:30 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:08:30.622638 | orchestrator | 2025-05-03 01:08:30 | INFO  | Task 383f9c04-7522-4007-9ec5-a8b8dcad319a is in state SUCCESS
2025-05-03 01:08:33.685164 | orchestrator | 2025-05-03 01:08:30 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:08:33.685303 | orchestrator | 2025-05-03 01:08:33 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED
2025-05-03 01:08:33.686907 | orchestrator | 2025-05-03 01:08:33 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED
2025-05-03 01:08:33.689172 | orchestrator | 2025-05-03 01:08:33 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED
2025-05-03 01:08:33.691114 | orchestrator | 2025-05-03 01:08:33 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:08:33.691244 | orchestrator | 2025-05-03 01:08:33 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:08:36.738717 | orchestrator | 2025-05-03 01:08:36 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED
2025-05-03 01:08:36.739793 | orchestrator | 2025-05-03 01:08:36 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED
2025-05-03 01:08:36.741787 | orchestrator | 2025-05-03 01:08:36 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED
2025-05-03 01:08:36.743264 | orchestrator | 2025-05-03 01:08:36 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:08:39.784902 | orchestrator | 2025-05-03 01:08:36 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:08:39.785064 | orchestrator | 2025-05-03 01:08:39 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED
2025-05-03 01:08:39.785890 | orchestrator | 2025-05-03 01:08:39 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED
2025-05-03 01:08:39.787450 | orchestrator | 2025-05-03 01:08:39 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED
2025-05-03 01:08:39.788939 | orchestrator | 2025-05-03 01:08:39 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:08:42.839657 | orchestrator | 2025-05-03 01:08:39 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:08:42.839793 | orchestrator | 2025-05-03 01:08:42 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED
2025-05-03 01:08:42.842139 | orchestrator | 2025-05-03 01:08:42 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED
2025-05-03 01:08:42.844218 | orchestrator | 2025-05-03 01:08:42 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED
2025-05-03 01:08:42.847378 | orchestrator | 2025-05-03 01:08:42 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:08:42.847899 | orchestrator | 2025-05-03 01:08:42 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:08:45.904710 | orchestrator | 2025-05-03 01:08:45 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED
2025-05-03 01:08:45.907445 | orchestrator | 2025-05-03 01:08:45 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED
2025-05-03 01:08:45.908787 | orchestrator | 2025-05-03 01:08:45 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED
2025-05-03 01:08:45.911154 | orchestrator | 2025-05-03 01:08:45 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:08:48.963230 | orchestrator | 2025-05-03 01:08:45 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:08:48.963419 | orchestrator | 2025-05-03 01:08:48 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED
2025-05-03 01:08:48.963970 | orchestrator | 2025-05-03 01:08:48 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED
2025-05-03 01:08:48.968879 | orchestrator | 2025-05-03 01:08:48 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED
2025-05-03 01:08:48.970077 | orchestrator | 2025-05-03 01:08:48 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:08:52.022981 | orchestrator | 2025-05-03 01:08:48 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:08:52.023265 | orchestrator | 2025-05-03 01:08:52 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED
2025-05-03 01:08:52.024961 | orchestrator | 2025-05-03 01:08:52 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED
2025-05-03 01:08:52.025086 | orchestrator | 2025-05-03 01:08:52 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED
2025-05-03 01:08:52.026120 | orchestrator | 2025-05-03 01:08:52 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:08:55.084926 | orchestrator | 2025-05-03 01:08:52 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:08:55.085063 | orchestrator | 2025-05-03 01:08:55 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED
2025-05-03 01:08:55.085420 | orchestrator | 2025-05-03 01:08:55 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED
2025-05-03 01:08:55.088985 | orchestrator | 2025-05-03 01:08:55 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED
2025-05-03 01:08:55.089796 | orchestrator | 2025-05-03 01:08:55 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:08:55.089958 | orchestrator | 2025-05-03 01:08:55 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:08:58.152161 | orchestrator | 2025-05-03 01:08:58 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED
2025-05-03 01:08:58.152837 | orchestrator | 2025-05-03 01:08:58 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED
2025-05-03 01:08:58.152885 | orchestrator | 2025-05-03 01:08:58 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED
2025-05-03 01:08:58.154975 | orchestrator | 2025-05-03 01:08:58 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:09:01.209792 | orchestrator | 2025-05-03 01:08:58 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:09:01.209932 | orchestrator | 2025-05-03 01:09:01 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED
2025-05-03 01:09:01.210252 | orchestrator | 2025-05-03 01:09:01 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED
2025-05-03 01:09:01.211376 | orchestrator | 2025-05-03 01:09:01 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED
2025-05-03 01:09:01.213951 | orchestrator | 2025-05-03 01:09:01 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:09:01.214544 | orchestrator | 2025-05-03 01:09:01 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:09:04.267133 | orchestrator | 2025-05-03 01:09:04 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED
2025-05-03 01:09:04.269879 | orchestrator | 2025-05-03 01:09:04 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED
2025-05-03 01:09:04.271778 | orchestrator | 2025-05-03 01:09:04 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED
2025-05-03 01:09:04.273554 | orchestrator | 2025-05-03 01:09:04 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:09:07.318506 | orchestrator | 2025-05-03 01:09:04 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:09:07.318643 | orchestrator | 2025-05-03 01:09:07 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED
2025-05-03 01:09:07.319182 | orchestrator | 2025-05-03 01:09:07 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED
2025-05-03 01:09:07.320971 | orchestrator | 2025-05-03 01:09:07 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED
2025-05-03 01:09:07.322258 | orchestrator | 2025-05-03 01:09:07 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:09:07.322518 | orchestrator | 2025-05-03 01:09:07 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:09:10.372418 | orchestrator | 2025-05-03 01:09:10 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED
2025-05-03 01:09:10.374482 | orchestrator | 2025-05-03 01:09:10 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED
2025-05-03 01:09:10.377010 | orchestrator | 2025-05-03 01:09:10 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED
2025-05-03 01:09:10.379576 | orchestrator | 2025-05-03 01:09:10 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:09:10.380331 | orchestrator | 2025-05-03 01:09:10 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:09:13.432075 | orchestrator | 2025-05-03 01:09:13 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED
2025-05-03 01:09:13.435974 | orchestrator | 2025-05-03 01:09:13 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED
2025-05-03 01:09:13.438157 | orchestrator | 2025-05-03 01:09:13 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED
2025-05-03 01:09:13.440081 | orchestrator | 2025-05-03 01:09:13 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:09:16.490617 | orchestrator | 2025-05-03 01:09:13 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:09:16.490757 | orchestrator | 2025-05-03 01:09:16 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state STARTED
2025-05-03 01:09:16.492094 | orchestrator | 2025-05-03 01:09:16 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED
2025-05-03 01:09:16.493489 | orchestrator | 2025-05-03 01:09:16 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED
2025-05-03 01:09:16.494776 | orchestrator | 2025-05-03 01:09:16 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:09:19.533805 | orchestrator | 2025-05-03 01:09:16 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:09:19.533945 | orchestrator | 2025-05-03 01:09:19 | INFO  | Task d6360f7b-1de4-4286-965c-870480e8c4a4 is in state SUCCESS
2025-05-03 01:09:19.535199 | orchestrator | 2025-05-03 01:09:19 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED
2025-05-03 01:09:19.537505 | orchestrator | 2025-05-03 01:09:19 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED
2025-05-03 01:09:19.539629 | orchestrator | 2025-05-03 01:09:19 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:09:19.539916 | orchestrator | 2025-05-03 01:09:19 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:09:22.595018 | orchestrator | 2025-05-03 01:09:22 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state STARTED
2025-05-03 01:09:22.596337 | orchestrator | 2025-05-03 01:09:22 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED
2025-05-03 01:09:22.597740 | orchestrator | 2025-05-03 01:09:22 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:09:22.598343 | orchestrator | 2025-05-03 01:09:22 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:09:25.648677 | orchestrator | 2025-05-03 01:09:25 | INFO  | Task b16e1e07-d4da-4c3e-9edd-ff6766c56b75 is in state SUCCESS
2025-05-03 01:09:25.649693 | orchestrator |
2025-05-03 01:09:25.649735 | orchestrator |
2025-05-03 01:09:25.649749 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-03 01:09:25.649763 | orchestrator |
2025-05-03 01:09:25.649776 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-03 01:09:25.649789 | orchestrator | Saturday 03 May 2025 01:07:34 +0000 (0:00:00.324) 0:00:00.324 **********
2025-05-03 01:09:25.649802 | orchestrator | ok: [testbed-node-0]
2025-05-03 01:09:25.649816 | orchestrator | ok: [testbed-node-1]
2025-05-03 01:09:25.649829 | orchestrator | ok: [testbed-node-2]
2025-05-03 01:09:25.649841 | orchestrator |
2025-05-03 01:09:25.649854 |
orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-03 01:09:25.649866 | orchestrator | Saturday 03 May 2025 01:07:34 +0000 (0:00:00.414) 0:00:00.739 **********
2025-05-03 01:09:25.649894 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2025-05-03 01:09:25.649918 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2025-05-03 01:09:25.649931 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2025-05-03 01:09:25.649943 | orchestrator |
2025-05-03 01:09:25.649956 | orchestrator | PLAY [Apply role octavia] ******************************************************
2025-05-03 01:09:25.649968 | orchestrator |
2025-05-03 01:09:25.649981 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-05-03 01:09:25.649993 | orchestrator | Saturday 03 May 2025 01:07:34 +0000 (0:00:00.301) 0:00:01.040 **********
2025-05-03 01:09:25.650077 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 01:09:25.650094 | orchestrator |
2025-05-03 01:09:25.650107 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2025-05-03 01:09:25.650119 | orchestrator | Saturday 03 May 2025 01:07:35 +0000 (0:00:00.829) 0:00:01.870 **********
2025-05-03 01:09:25.650133 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2025-05-03 01:09:25.650145 | orchestrator |
2025-05-03 01:09:25.650171 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2025-05-03 01:09:25.650184 | orchestrator | Saturday 03 May 2025 01:07:39 +0000 (0:00:03.423) 0:00:05.293 **********
2025-05-03 01:09:25.650197 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2025-05-03 01:09:25.650209 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2025-05-03 01:09:25.650222 | orchestrator |
2025-05-03 01:09:25.650234 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2025-05-03 01:09:25.650268 | orchestrator | Saturday 03 May 2025 01:07:45 +0000 (0:00:06.364) 0:00:11.658 **********
2025-05-03 01:09:25.650281 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-03 01:09:25.650294 | orchestrator |
2025-05-03 01:09:25.650309 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2025-05-03 01:09:25.650323 | orchestrator | Saturday 03 May 2025 01:07:48 +0000 (0:00:03.432) 0:00:15.091 **********
2025-05-03 01:09:25.650338 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-03 01:09:25.650352 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-05-03 01:09:25.650367 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-05-03 01:09:25.650381 | orchestrator |
2025-05-03 01:09:25.650396 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2025-05-03 01:09:25.650410 | orchestrator | Saturday 03 May 2025 01:07:56 +0000 (0:00:07.944) 0:00:23.036 **********
2025-05-03 01:09:25.650424 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-03 01:09:25.650437 | orchestrator |
2025-05-03 01:09:25.650450 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2025-05-03 01:09:25.650463 | orchestrator | Saturday 03 May 2025 01:08:00 +0000 (0:00:03.332) 0:00:26.369 **********
2025-05-03 01:09:25.650475 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2025-05-03 01:09:25.650488 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2025-05-03 01:09:25.650500 | orchestrator |
2025-05-03 01:09:25.650513 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2025-05-03 01:09:25.650525 | orchestrator | Saturday 03 May 2025 01:08:07 +0000 (0:00:15.308) 0:00:33.985 **********
2025-05-03 01:09:25.650538 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2025-05-03 01:09:25.650550 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2025-05-03 01:09:25.650563 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2025-05-03 01:09:25.651043 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2025-05-03 01:09:25.651066 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2025-05-03 01:09:25.651082 | orchestrator |
2025-05-03 01:09:25.651095 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-05-03 01:09:25.651107 | orchestrator | Saturday 03 May 2025 01:08:23 +0000 (0:00:15.308) 0:00:49.293 **********
2025-05-03 01:09:25.651119 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 01:09:25.651132 | orchestrator |
2025-05-03 01:09:25.651144 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2025-05-03 01:09:25.651157 | orchestrator | Saturday 03 May 2025 01:08:23 +0000 (0:00:00.813) 0:00:50.107 **********
2025-05-03 01:09:25.651195 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.: ", "response": "<html><body><h1>503 Service Unavailable</h1>\nNo server is available to handle this request.\n</body></html>\n"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request.: "}
2025-05-03 01:09:25.651212 | orchestrator |
2025-05-03 01:09:25.651225 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 01:09:25.651405 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-05-03 01:09:25.652109 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 01:09:25.652132 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 01:09:25.652145 | orchestrator |
2025-05-03 01:09:25.652157 | orchestrator |
2025-05-03 01:09:25.652170 | orchestrator | TASKS RECAP ********************************************************************
2025-05-03 01:09:25.652182 | orchestrator | Saturday 03 May 2025 01:08:27 +0000 (0:00:03.240) 0:00:53.347 **********
2025-05-03 01:09:25.652195 | orchestrator | ===============================================================================
2025-05-03 01:09:25.652207 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.31s
2025-05-03 01:09:25.652219 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.94s
2025-05-03 01:09:25.652232 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.62s
2025-05-03 01:09:25.652278 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.36s
2025-05-03 01:09:25.652293 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.43s
2025-05-03 01:09:25.652306 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.42s
2025-05-03 01:09:25.652318 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.33s
2025-05-03 01:09:25.652330 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.24s
2025-05-03 01:09:25.652343 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.83s
2025-05-03 01:09:25.652355 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.81s
2025-05-03 01:09:25.652368 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.41s
2025-05-03 01:09:25.652380 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.30s
2025-05-03 01:09:25.652393 | orchestrator |
2025-05-03 01:09:25.652405 | orchestrator |
2025-05-03 01:09:25.652422 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-03 01:09:25.652435 | orchestrator |
2025-05-03 01:09:25.652447 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-03 01:09:25.652460 | orchestrator | Saturday 03 May 2025 01:06:42 +0000 (0:00:00.260) 0:00:00.260 **********
2025-05-03 01:09:25.652472 | orchestrator | ok: [testbed-node-0]
2025-05-03 01:09:25.652485 | orchestrator | ok: [testbed-node-1]
2025-05-03 01:09:25.652498 | orchestrator | ok: [testbed-node-2]
2025-05-03 01:09:25.652510 | orchestrator |
2025-05-03 01:09:25.652523 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-03 01:09:25.652535 | orchestrator | Saturday 03 May 2025 01:06:42 +0000 (0:00:00.439) 0:00:00.699 **********
2025-05-03 01:09:25.652548 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2025-05-03 01:09:25.652561 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2025-05-03 01:09:25.652573 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2025-05-03 01:09:25.652586 | orchestrator |
2025-05-03 01:09:25.652610 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2025-05-03 01:09:25.652622 | orchestrator |
2025-05-03 01:09:25.652634 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2025-05-03 01:09:25.652647 | orchestrator | Saturday 03 May 2025 01:06:43 +0000 (0:00:00.524) 0:00:01.224 **********
2025-05-03 01:09:25.652659 | orchestrator |
2025-05-03 01:09:25.652672 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2025-05-03 01:09:25.652684 | orchestrator | ok: [testbed-node-0]
2025-05-03 01:09:25.652697 | orchestrator | ok: [testbed-node-1]
2025-05-03 01:09:25.652717 | orchestrator | ok: [testbed-node-2]
2025-05-03 01:09:25.652734 | orchestrator |
2025-05-03 01:09:25.652749 | orchestrator | PLAY RECAP *********************************************************************
2025-05-03 01:09:25.652764 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 01:09:25.652779 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 01:09:25.652793 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-03 01:09:25.652807 | orchestrator |
2025-05-03 01:09:25.652821 | orchestrator |
2025-05-03 01:09:25.652835 | orchestrator | TASKS RECAP ********************************************************************
2025-05-03 01:09:25.652850 | orchestrator | Saturday 03 May 2025 01:09:18 +0000 (0:02:34.884) 0:02:36.109 **********
2025-05-03 01:09:25.652864 | orchestrator | ===============================================================================
2025-05-03 01:09:25.652878 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 154.88s
2025-05-03 01:09:25.652893 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.52s
2025-05-03 01:09:25.652906 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.44s
2025-05-03 01:09:25.652921 | orchestrator |
2025-05-03 01:09:25.652934 | orchestrator |
2025-05-03 01:09:25.652949 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-03 01:09:25.652963 | orchestrator |
2025-05-03 01:09:25.653016 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-03 01:09:25.653033 | orchestrator | Saturday 03 May 2025 01:07:35 +0000 (0:00:00.299) 0:00:00.299 **********
2025-05-03 01:09:25.653047 | orchestrator | ok: [testbed-node-0]
2025-05-03 01:09:25.653061 | orchestrator | ok: [testbed-node-1]
2025-05-03 01:09:25.653074 | orchestrator | ok: [testbed-node-2]
2025-05-03 01:09:25.653086 | orchestrator |
2025-05-03 01:09:25.653099 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-03 01:09:25.653111 | orchestrator | Saturday 03 May 2025 01:07:36 +0000 (0:00:00.319) 0:00:00.618 **********
2025-05-03 01:09:25.653124 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-05-03 01:09:25.653136 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-05-03 01:09:25.653149 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-05-03 01:09:25.653161 | orchestrator |
2025-05-03 01:09:25.653173 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-05-03 01:09:25.653186 | orchestrator |
2025-05-03 01:09:25.653198 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-05-03 01:09:25.653210 | orchestrator | Saturday 03 May 2025 01:07:36 +0000 (0:00:00.247) 0:00:00.866 **********
2025-05-03 01:09:25.653223 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 01:09:25.653235 | orchestrator |
2025-05-03 01:09:25.653310 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2025-05-03 01:09:25.653325 | orchestrator | Saturday 03 May 2025 01:07:37 +0000 (0:00:00.544) 0:00:01.410 **********
2025-05-03 01:09:25.653339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-03 01:09:25.653366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-03 01:09:25.653380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-03 01:09:25.653394 | orchestrator |
2025-05-03 01:09:25.653406 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2025-05-03 01:09:25.653419 | orchestrator | Saturday 03 May 2025 01:07:37 +0000 (0:00:00.798) 0:00:02.208 **********
2025-05-03 01:09:25.653432 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2025-05-03 01:09:25.653449 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2025-05-03 01:09:25.653462 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-03 01:09:25.653474 | orchestrator |
2025-05-03 01:09:25.653486 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-05-03 01:09:25.653499 | orchestrator | Saturday 03 May 2025 01:07:38 +0000 (0:00:00.455) 0:00:02.664 **********
2025-05-03 01:09:25.653511 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 01:09:25.653524 | orchestrator |
2025-05-03 01:09:25.653536 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2025-05-03 01:09:25.653554 | orchestrator | Saturday 03 May 2025 01:07:38 +0000 (0:00:00.498) 0:00:03.163 **********
2025-05-03 01:09:25.653600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-03 01:09:25.653616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-03 01:09:25.653640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-03 01:09:25.653654 | orchestrator |
2025-05-03 01:09:25.653667 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2025-05-03 01:09:25.653679 | orchestrator | Saturday 03 May 2025 01:07:40 +0000 (0:00:01.296) 0:00:04.459 **********
2025-05-03 01:09:25.653692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-03 01:09:25.653705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-03 01:09:25.653718 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:09:25.653731 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:09:25.653770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-03 01:09:25.653785 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:09:25.653798 | orchestrator |
2025-05-03 01:09:25.653811 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2025-05-03 01:09:25.653823 | orchestrator | Saturday 03 May 2025 01:07:40 +0000 (0:00:00.417) 0:00:04.876 **********
2025-05-03 01:09:25.653841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-03 01:09:25.653855 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:09:25.653867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-03 01:09:25.653880 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:09:25.653893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-03 01:09:25.653906 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:09:25.653918 | orchestrator |
2025-05-03 01:09:25.653931 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2025-05-03 01:09:25.653943 | orchestrator | Saturday 03 May 2025 01:07:41 +0000 (0:00:00.659) 0:00:05.536 **********
2025-05-03 01:09:25.653956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-03 01:09:25.653969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-03 01:09:25.654086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-03 01:09:25.654086 | orchestrator |
2025-05-03 01:09:25.654102 | orchestrator | TASK [grafana : Copying over grafana.ini]
************************************** 2025-05-03 01:09:25.654115 | orchestrator | Saturday 03 May 2025 01:07:42 +0000 (0:00:01.396) 0:00:06.932 ********** 2025-05-03 01:09:25.654128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-03 01:09:25.654141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-03 01:09:25.654154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-03 01:09:25.654166 | orchestrator | 2025-05-03 01:09:25.654179 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-05-03 01:09:25.654191 | orchestrator | Saturday 03 May 2025 01:07:44 +0000 (0:00:01.592) 0:00:08.525 ********** 2025-05-03 01:09:25.654204 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:09:25.654217 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:09:25.654235 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:09:25.654265 | orchestrator | 2025-05-03 01:09:25.654278 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-05-03 01:09:25.654291 | orchestrator | Saturday 03 May 2025 01:07:44 +0000 (0:00:00.286) 0:00:08.811 ********** 2025-05-03 01:09:25.654303 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-03 01:09:25.654315 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-03 01:09:25.654328 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-03 01:09:25.654346 | orchestrator | 2025-05-03 01:09:25.654358 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-05-03 01:09:25.654370 | orchestrator | Saturday 03 May 2025 01:07:45 +0000 (0:00:01.342) 0:00:10.154 ********** 2025-05-03 01:09:25.654383 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-03 01:09:25.654395 | 
orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-03 01:09:25.654439 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-03 01:09:25.654453 | orchestrator | 2025-05-03 01:09:25.654466 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-05-03 01:09:25.654478 | orchestrator | Saturday 03 May 2025 01:07:47 +0000 (0:00:01.387) 0:00:11.541 ********** 2025-05-03 01:09:25.654491 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-03 01:09:25.654503 | orchestrator | 2025-05-03 01:09:25.654515 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-05-03 01:09:25.654527 | orchestrator | Saturday 03 May 2025 01:07:47 +0000 (0:00:00.459) 0:00:12.000 ********** 2025-05-03 01:09:25.654540 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-05-03 01:09:25.654552 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-05-03 01:09:25.654564 | orchestrator | ok: [testbed-node-0] 2025-05-03 01:09:25.654576 | orchestrator | ok: [testbed-node-1] 2025-05-03 01:09:25.654589 | orchestrator | ok: [testbed-node-2] 2025-05-03 01:09:25.654601 | orchestrator | 2025-05-03 01:09:25.654613 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-05-03 01:09:25.654626 | orchestrator | Saturday 03 May 2025 01:07:48 +0000 (0:00:00.878) 0:00:12.878 ********** 2025-05-03 01:09:25.654638 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:09:25.654651 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:09:25.654663 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:09:25.654675 | orchestrator | 2025-05-03 01:09:25.654687 | orchestrator | TASK [grafana : Copying over custom dashboards] 
******************************** 2025-05-03 01:09:25.654700 | orchestrator | Saturday 03 May 2025 01:07:49 +0000 (0:00:00.504) 0:00:13.383 ********** 2025-05-03 01:09:25.654712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1329883, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5026248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.654727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1329883, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5026248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.654740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1329883, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5026248, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.654759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1329871, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4916246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.654800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1329871, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4916246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.654815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1329871, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 
'ctime': 1746231268.4916246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.654828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1329866, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4896245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.654841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1329866, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4896245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.654854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1329866, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 
'mtime': 1737057118.0, 'ctime': 1746231268.4896245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.654872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1329877, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4936247, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.654886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1329877, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4936247, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.654926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1329877, 'dev': 129, 'nlink': 1, 'atime': 
1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4936247, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.654941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1329853, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4856246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.654954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1329853, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4856246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.654967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1329853, 'dev': 129, 
'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4856246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.654986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1329868, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4896245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.654999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1329868, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4896245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.655020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1329868, 
'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4896245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.655033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1329876, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4936247, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.655046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1329876, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4936247, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.655059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1329876, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4936247, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.655078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1329849, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4846244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.655091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1329849, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4846244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.655111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1329849, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4846244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.655125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1329830, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4786243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.655138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1329830, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4786243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.655151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1329830, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4786243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.655170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1329855, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4866245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.655183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1329855, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4866245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.655196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1329855, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4866245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.655216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1329841, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4826245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.655229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1329841, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4826245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.655242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1329841, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.4826245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-03 01:09:25.655 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/radosgw-overview.json, path=/operations/grafana/dashboards/ceph/radosgw-overview.json, mode=0644, owner=root:root, size=39370)
2025-05-03 01:09:25.655 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/multi-cluster-overview.json, path=/operations/grafana/dashboards/ceph/multi-cluster-overview.json, mode=0644, owner=root:root, size=62371)
2025-05-03 01:09:25.655 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/rbd-overview.json, path=/operations/grafana/dashboards/ceph/rbd-overview.json, mode=0644, owner=root:root, size=25686)
2025-05-03 01:09:25.655 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/ceph_pools.json, path=/operations/grafana/dashboards/ceph/ceph_pools.json, mode=0644, owner=root:root, size=25279)
2025-05-03 01:09:25.655 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/pool-overview.json, path=/operations/grafana/dashboards/ceph/pool-overview.json, mode=0644, owner=root:root, size=49139)
2025-05-03 01:09:25.655 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/ceph-cluster-advanced.json, path=/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json, mode=0644, owner=root:root, size=117836)
2025-05-03 01:09:25.655 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/ceph_overview.json, path=/operations/grafana/dashboards/ceph/ceph_overview.json, mode=0644, owner=root:root, size=80386)
2025-05-03 01:09:25.655 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/osd-device-details.json, path=/operations/grafana/dashboards/ceph/osd-device-details.json, mode=0644, owner=root:root, size=26655)
2025-05-03 01:09:25.655 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=infrastructure/node_exporter_full.json, path=/operations/grafana/dashboards/infrastructure/node_exporter_full.json, mode=0644, owner=root:root, size=682774)
2025-05-03 01:09:25.655 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=infrastructure/libvirt.json, path=/operations/grafana/dashboards/infrastructure/libvirt.json, mode=0644, owner=root:root, size=29672)
2025-05-03 01:09:25.655 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=infrastructure/prometheus_alertmanager.json, path=/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json, mode=0644, owner=root:root, size=115472)
2025-05-03 01:09:25.655 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=infrastructure/blackbox.json, path=/operations/grafana/dashboards/infrastructure/blackbox.json, mode=0644, owner=root:root, size=31128)
2025-05-03 01:09:25.655 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=infrastructure/rabbitmq.json, path=/operations/grafana/dashboards/infrastructure/rabbitmq.json, mode=0644, owner=root:root, size=222049)
2025-05-03 01:09:25.655 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=infrastructure/node_exporter_side_by_side.json, path=/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json, mode=0644, owner=root:root, size=70691)
2025-05-03 01:09:25.655 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=infrastructure/opensearch.json, path=/operations/grafana/dashboards/infrastructure/opensearch.json, mode=0644, owner=root:root, size=65458)
2025-05-03 01:09:25.655 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=infrastructure/cadvisor.json, path=/operations/grafana/dashboards/infrastructure/cadvisor.json, mode=0644, owner=root:root, size=53882)
2025-05-03 01:09:25.655 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=infrastructure/memcached.json, path=/operations/grafana/dashboards/infrastructure/memcached.json, mode=0644, owner=root:root, size=24243)
2025-05-03 01:09:25.656 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=infrastructure/redfish.json, path=/operations/grafana/dashboards/infrastructure/redfish.json, mode=0644, owner=root:root, size=38087)
2025-05-03 01:09:25.656 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=infrastructure/prometheus.json, path=/operations/grafana/dashboards/infrastructure/prometheus.json, mode=0644, owner=root:root, size=100249)
2025-05-03 01:09:25.656073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg':
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1329933, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5076249, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.656084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1329933, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5076249, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.656098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1329933, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5076249, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.656109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1329930, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5046248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.656124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1329930, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5046248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.656135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1329930, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5046248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.656146 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1329942, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.508625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.656156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1329942, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.508625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.656167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1329942, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.508625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.656182 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1329945, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.513625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.656197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1329945, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.513625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.656208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1329945, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.513625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-05-03 01:09:25.656218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1330017, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5356255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.656229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1330017, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5356255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.656239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1330017, 'dev': 129, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746231268.5356255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-03 01:09:25.656262 | orchestrator | 2025-05-03 01:09:25.656273 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-05-03 01:09:25.656288 | orchestrator | Saturday 03 May 2025 01:08:21 +0000 (0:00:32.882) 0:00:46.265 ********** 2025-05-03 01:09:25.656304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-03 01:09:25.656315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-03 01:09:25.656326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-03 01:09:25.656336 | orchestrator | 2025-05-03 01:09:25.656347 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-05-03 01:09:25.656357 | orchestrator | Saturday 03 May 2025 01:08:22 +0000 (0:00:01.012) 0:00:47.278 ********** 2025-05-03 01:09:25.656368 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:09:25.656378 | orchestrator | 2025-05-03 01:09:25.656388 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-05-03 01:09:25.656398 | orchestrator | Saturday 03 May 2025 01:08:25 +0000 (0:00:02.554) 0:00:49.833 ********** 2025-05-03 01:09:25.656409 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:09:25.656419 | orchestrator | 2025-05-03 01:09:25.656429 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-03 01:09:25.656439 | orchestrator | Saturday 03 May 2025 01:08:27 +0000 (0:00:02.257) 0:00:52.091 ********** 2025-05-03 01:09:25.656449 | orchestrator | 2025-05-03 01:09:25.656464 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-03 01:09:25.656474 | orchestrator | Saturday 03 May 2025 01:08:27 +0000 (0:00:00.060) 0:00:52.152 ********** 2025-05-03 01:09:25.656485 | orchestrator | 2025-05-03 01:09:25.656495 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-03 
01:09:25.656505 | orchestrator | Saturday 03 May 2025 01:08:27 +0000 (0:00:00.056) 0:00:52.208 ********** 2025-05-03 01:09:25.656515 | orchestrator | 2025-05-03 01:09:25.656525 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-05-03 01:09:25.656535 | orchestrator | Saturday 03 May 2025 01:08:28 +0000 (0:00:00.227) 0:00:52.436 ********** 2025-05-03 01:09:25.656545 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:09:25.656555 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:09:25.656566 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:09:25.656581 | orchestrator | 2025-05-03 01:09:25.656591 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-05-03 01:09:25.656601 | orchestrator | Saturday 03 May 2025 01:08:34 +0000 (0:00:06.773) 0:00:59.210 ********** 2025-05-03 01:09:25.656611 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:09:25.656622 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:09:25.656632 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-05-03 01:09:25.656642 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 
2025-05-03 01:09:25.656653 | orchestrator | ok: [testbed-node-0] 2025-05-03 01:09:25.656663 | orchestrator | 2025-05-03 01:09:25.656673 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-05-03 01:09:25.656683 | orchestrator | Saturday 03 May 2025 01:09:01 +0000 (0:00:26.829) 0:01:26.040 ********** 2025-05-03 01:09:25.656694 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:09:25.656704 | orchestrator | changed: [testbed-node-2] 2025-05-03 01:09:25.656714 | orchestrator | changed: [testbed-node-1] 2025-05-03 01:09:25.656724 | orchestrator | 2025-05-03 01:09:25.656735 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-05-03 01:09:25.656745 | orchestrator | Saturday 03 May 2025 01:09:16 +0000 (0:00:15.131) 0:01:41.171 ********** 2025-05-03 01:09:25.656755 | orchestrator | ok: [testbed-node-0] 2025-05-03 01:09:25.656765 | orchestrator | 2025-05-03 01:09:25.656776 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-05-03 01:09:25.656790 | orchestrator | Saturday 03 May 2025 01:09:19 +0000 (0:00:02.322) 0:01:43.494 ********** 2025-05-03 01:09:28.701502 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:09:28.701618 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:09:28.701636 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:09:28.701652 | orchestrator | 2025-05-03 01:09:28.701668 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-05-03 01:09:28.701684 | orchestrator | Saturday 03 May 2025 01:09:19 +0000 (0:00:00.405) 0:01:43.900 ********** 2025-05-03 01:09:28.701701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2025-05-03 01:09:28.701719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-05-03 01:09:28.701735 | orchestrator | 2025-05-03 01:09:28.701749 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-05-03 01:09:28.701763 | orchestrator | Saturday 03 May 2025 01:09:21 +0000 (0:00:02.425) 0:01:46.325 ********** 2025-05-03 01:09:28.701777 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:09:28.701791 | orchestrator | 2025-05-03 01:09:28.701805 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-03 01:09:28.701820 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-03 01:09:28.701836 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-03 01:09:28.701851 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-03 01:09:28.701865 | orchestrator | 2025-05-03 01:09:28.701879 | orchestrator | 2025-05-03 01:09:28.701893 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-03 01:09:28.701933 | orchestrator | Saturday 03 May 2025 01:09:22 +0000 (0:00:00.378) 0:01:46.703 ********** 2025-05-03 01:09:28.701957 | orchestrator | =============================================================================== 2025-05-03 01:09:28.701988 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 32.88s 2025-05-03 01:09:28.702086 | orchestrator | grafana : Waiting for grafana 
to start on first node ------------------- 26.83s 2025-05-03 01:09:28.702119 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 15.13s 2025-05-03 01:09:28.702145 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.77s 2025-05-03 01:09:28.702171 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.55s 2025-05-03 01:09:28.702189 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.43s 2025-05-03 01:09:28.702205 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.32s 2025-05-03 01:09:28.702221 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.26s 2025-05-03 01:09:28.702236 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.59s 2025-05-03 01:09:28.702331 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.40s 2025-05-03 01:09:28.702347 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.39s 2025-05-03 01:09:28.702361 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.34s 2025-05-03 01:09:28.702375 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.30s 2025-05-03 01:09:28.702389 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.01s 2025-05-03 01:09:28.702403 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.88s 2025-05-03 01:09:28.702417 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.80s 2025-05-03 01:09:28.702430 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.66s 2025-05-03 01:09:28.702444 | orchestrator | grafana : include_tasks 
------------------------------------------------- 0.54s 2025-05-03 01:09:28.702458 | orchestrator | grafana : Prune templated Grafana dashboards ---------------------------- 0.50s 2025-05-03 01:09:28.702472 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.50s 2025-05-03 01:09:28.702486 | orchestrator | 2025-05-03 01:09:25 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:09:28.702501 | orchestrator | 2025-05-03 01:09:25 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:09:28.702515 | orchestrator | 2025-05-03 01:09:25 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:09:28.702548 | orchestrator | 2025-05-03 01:09:28 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:09:28.703107 | orchestrator | 2025-05-03 01:09:28 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:09:31.754615 | orchestrator | 2025-05-03 01:09:28 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:09:31.754761 | orchestrator | 2025-05-03 01:09:31 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:09:34.799685 | orchestrator | 2025-05-03 01:09:31 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:09:34.799791 | orchestrator | 2025-05-03 01:09:31 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:09:34.799838 | orchestrator | 2025-05-03 01:09:34 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:09:34.802216 | orchestrator | 2025-05-03 01:09:34 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:09:37.873623 | orchestrator | 2025-05-03 01:09:34 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:09:37.873828 | orchestrator | 2025-05-03 01:09:37 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 
01:09:37.875427 | orchestrator | 2025-05-03 01:09:37 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:09:40.920416 | orchestrator | 2025-05-03 01:09:37 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:09:40.920565 | orchestrator | 2025-05-03 01:09:40 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:09:43.974685 | orchestrator | 2025-05-03 01:09:40 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:09:43.974809 | orchestrator | 2025-05-03 01:09:40 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:09:43.974846 | orchestrator | 2025-05-03 01:09:43 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:09:43.975857 | orchestrator | 2025-05-03 01:09:43 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:09:47.048738 | orchestrator | 2025-05-03 01:09:43 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:09:47.048898 | orchestrator | 2025-05-03 01:09:47 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:09:50.092649 | orchestrator | 2025-05-03 01:09:47 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:09:50.092804 | orchestrator | 2025-05-03 01:09:47 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:09:50.092846 | orchestrator | 2025-05-03 01:09:50 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:09:50.094704 | orchestrator | 2025-05-03 01:09:50 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:09:53.147732 | orchestrator | 2025-05-03 01:09:50 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:09:53.147876 | orchestrator | 2025-05-03 01:09:53 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:09:56.209951 | orchestrator | 2025-05-03 01:09:53 | INFO  | Task 
48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:09:56.210132 | orchestrator | 2025-05-03 01:09:53 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:09:56.210171 | orchestrator | 2025-05-03 01:09:56 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:09:56.211236 | orchestrator | 2025-05-03 01:09:56 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:09:59.263505 | orchestrator | 2025-05-03 01:09:56 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:09:59.263646 | orchestrator | 2025-05-03 01:09:59 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:09:59.264085 | orchestrator | 2025-05-03 01:09:59 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:10:02.323441 | orchestrator | 2025-05-03 01:09:59 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:10:02.323623 | orchestrator | 2025-05-03 01:10:02 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:10:02.324957 | orchestrator | 2025-05-03 01:10:02 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:10:05.369495 | orchestrator | 2025-05-03 01:10:02 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:10:05.369637 | orchestrator | 2025-05-03 01:10:05 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:10:05.371377 | orchestrator | 2025-05-03 01:10:05 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:10:08.413337 | orchestrator | 2025-05-03 01:10:05 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:10:08.413470 | orchestrator | 2025-05-03 01:10:08 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED 2025-05-03 01:10:08.413981 | orchestrator | 2025-05-03 01:10:08 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 
01:10:11.449934 | orchestrator | 2025-05-03 01:10:08 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:10:11.450138 | orchestrator | 2025-05-03 01:10:11 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED
2025-05-03 01:10:11.451572 | orchestrator | 2025-05-03 01:10:11 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:10:11.451988 | orchestrator | 2025-05-03 01:10:11 | INFO  | Wait 1 second(s) until the next check
[... the same polling messages repeat every ~3 seconds from 01:10:14 to 01:12:34, interleaved with "Wait 1 second(s) until the next check"; tasks 75cc4c1c-e46b-4eab-8eec-890312e38de3 and 48a7cfec-8936-4280-adce-1507df83d421 remain in state STARTED throughout ...]
2025-05-03 01:12:37.976326 | orchestrator | 2025-05-03 01:12:37 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED
2025-05-03 01:12:37.977908 | orchestrator | 2025-05-03 01:12:37 | INFO  | Task 5b6c36cf-c2c3-4960-a4db-c3514a3ecec0 is in state STARTED
2025-05-03 01:12:37.979459 | orchestrator | 2025-05-03 01:12:37 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
[... all three tasks polled in state STARTED every ~3 seconds from 01:12:41 to 01:12:47 ...]
2025-05-03 01:12:50.218208 | orchestrator | 2025-05-03 01:12:50 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state STARTED
2025-05-03 01:12:50.218541 | orchestrator | 2025-05-03 01:12:50 | INFO  | Task 5b6c36cf-c2c3-4960-a4db-c3514a3ecec0 is in state SUCCESS
2025-05-03 01:12:50.219935 | orchestrator | 2025-05-03 01:12:50 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
[... the two remaining tasks polled in state STARTED every ~3 seconds from 01:12:53 to 01:13:32 ...]
2025-05-03 01:13:35.966129 | orchestrator | 2025-05-03 01:13:35 | INFO  | Task 75cc4c1c-e46b-4eab-8eec-890312e38de3 is in state SUCCESS
2025-05-03 01:13:35.967832 | orchestrator |
2025-05-03 01:13:35.967881 | orchestrator | None
2025-05-03 01:13:35.967894 | orchestrator |
2025-05-03 01:13:35.967905 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-03 01:13:35.967917 | orchestrator |
2025-05-03 01:13:35.967927 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-05-03 01:13:35.967964 | orchestrator | Saturday 03 May 2025 01:05:22 +0000 (0:00:00.383) 0:00:00.383 **********
2025-05-03 01:13:35.967976 | orchestrator | changed: [testbed-manager]
2025-05-03 01:13:35.967989 | orchestrator | changed: [testbed-node-0]
2025-05-03 01:13:35.968000 | orchestrator | changed: [testbed-node-1]
2025-05-03 01:13:35.968012 | orchestrator | changed: [testbed-node-2]
2025-05-03 01:13:35.968023 | orchestrator | changed: [testbed-node-3]
2025-05-03 01:13:35.968035 | orchestrator | changed: [testbed-node-4]
2025-05-03 01:13:35.968047 | orchestrator | changed: [testbed-node-5]
2025-05-03 01:13:35.968058 | orchestrator |
2025-05-03 01:13:35.968069 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-03 01:13:35.968079 | orchestrator | Saturday 03 May 2025 01:05:23 +0000 (0:00:00.942) 0:00:01.326 **********
2025-05-03 01:13:35.968090 | orchestrator | changed: [testbed-manager]
2025-05-03 01:13:35.968103 | orchestrator | changed: [testbed-node-0]
2025-05-03 01:13:35.968120 | orchestrator | changed: [testbed-node-1]
2025-05-03 01:13:35.968220 | orchestrator | changed: [testbed-node-2]
2025-05-03 01:13:35.968246 | orchestrator | changed: [testbed-node-3]
2025-05-03 01:13:35.968290 | orchestrator | changed: [testbed-node-4]
2025-05-03 01:13:35.968334 | orchestrator | changed: [testbed-node-5]
2025-05-03 01:13:35.968352 | orchestrator |
2025-05-03 01:13:35.968540 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-03 01:13:35.968569 | orchestrator | Saturday 03 May 2025 01:05:25 +0000 (0:00:01.424) 0:00:02.750 **********
2025-05-03 01:13:35.968591 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-05-03 01:13:35.968611 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-05-03 01:13:35.968639 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-05-03 01:13:35.968658 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-05-03 01:13:35.968677 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-05-03 01:13:35.968687 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-05-03 01:13:35.968697 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-05-03 01:13:35.968707 | orchestrator |
2025-05-03 01:13:35.968718 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-05-03 01:13:35.968728 | orchestrator |
2025-05-03 01:13:35.968738 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-05-03 01:13:35.968748 | orchestrator | Saturday 03 May 2025 01:05:27 +0000 (0:00:01.981) 0:00:04.731 **********
2025-05-03 01:13:35.968759 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 01:13:35.968797 | orchestrator |
2025-05-03 01:13:35.968808 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-05-03 01:13:35.968818 | orchestrator | Saturday 03 May 2025 01:05:28 +0000 (0:00:01.226) 0:00:05.958 **********
2025-05-03 01:13:35.968829 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-05-03 01:13:35.968840 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-05-03 01:13:35.968850 | orchestrator |
2025-05-03 01:13:35.968860 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-05-03 01:13:35.968871 | orchestrator | Saturday 03 May 2025 01:05:32 +0000 (0:00:04.483) 0:00:10.441 **********
2025-05-03 01:13:35.968881 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-03 01:13:35.968892 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-03 01:13:35.968902 | orchestrator | changed: [testbed-node-0]
2025-05-03 01:13:35.968912 | orchestrator |
2025-05-03 01:13:35.968922 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-05-03 01:13:35.968933 | orchestrator | Saturday 03 May 2025 01:05:37 +0000 (0:00:04.384) 0:00:14.825 **********
2025-05-03 01:13:35.968961 | orchestrator | changed: [testbed-node-0]
2025-05-03 01:13:35.968971 | orchestrator |
2025-05-03 01:13:35.968982 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-05-03 01:13:35.968992 | orchestrator | Saturday 03 May 2025 01:05:38 +0000 (0:00:00.965) 0:00:15.791 **********
2025-05-03 01:13:35.969002 | orchestrator | changed: [testbed-node-0]
2025-05-03 01:13:35.969012 | orchestrator |
2025-05-03 01:13:35.969025 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-05-03 01:13:35.969044 | orchestrator | Saturday 03 May 2025 01:05:40 +0000 (0:00:01.917) 0:00:17.708 **********
2025-05-03 01:13:35.969082 | orchestrator | changed: [testbed-node-0]
2025-05-03 01:13:35.969101 | orchestrator |
2025-05-03 01:13:35.969114 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-05-03 01:13:35.969130 | orchestrator | Saturday 03 May 2025 01:05:47 +0000 (0:00:07.531) 0:00:25.239 **********
2025-05-03 01:13:35.969140 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:13:35.969150 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:13:35.969160 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:13:35.969171 | orchestrator |
2025-05-03 01:13:35.969181 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-05-03 01:13:35.969191 | orchestrator | Saturday 03 May 2025 01:05:49 +0000 (0:00:01.685) 0:00:26.925 **********
2025-05-03 01:13:35.969200 | orchestrator | ok: [testbed-node-0]
2025-05-03 01:13:35.969211 | orchestrator |
2025-05-03 01:13:35.969221 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-05-03 01:13:35.969232 | orchestrator | Saturday 03 May 2025 01:06:18 +0000 (0:00:29.299) 0:00:56.225 **********
2025-05-03 01:13:35.969242 | orchestrator | changed: [testbed-node-0]
2025-05-03 01:13:35.969252 | orchestrator |
2025-05-03 01:13:35.969262 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-05-03 01:13:35.969272 | orchestrator | Saturday 03 May 2025 01:06:30 +0000 (0:00:12.239) 0:01:08.464 **********
2025-05-03 01:13:35.969282 | orchestrator | ok: [testbed-node-0]
2025-05-03 01:13:35.969292 | orchestrator |
2025-05-03 01:13:35.969302 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-05-03 01:13:35.969312 | orchestrator | Saturday 03 May 2025 01:06:40 +0000 (0:00:09.830) 0:01:18.294 **********
2025-05-03 01:13:35.969335 | orchestrator | ok: [testbed-node-0]
2025-05-03 01:13:35.969346 | orchestrator |
2025-05-03 01:13:35.969356 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-05-03 01:13:35.969366 | orchestrator | Saturday 03 May 2025 01:06:41 +0000 (0:00:00.946) 0:01:19.241 **********
2025-05-03 01:13:35.969381 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:13:35.969397 | orchestrator |
2025-05-03 01:13:35.969413 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-05-03 01:13:35.969440 | orchestrator | Saturday 03 May 2025 01:06:42 +0000 (0:00:00.605) 0:01:19.847 **********
2025-05-03 01:13:35.969458 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 01:13:35.969474 | orchestrator |
2025-05-03 01:13:35.969491 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-05-03 01:13:35.969508 | orchestrator | Saturday 03 May 2025 01:06:43 +0000 (0:00:00.777) 0:01:20.624 **********
2025-05-03 01:13:35.969524 | orchestrator | ok: [testbed-node-0]
2025-05-03 01:13:35.969542 | orchestrator |
2025-05-03 01:13:35.969559 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-05-03 01:13:35.969710 | orchestrator | Saturday 03 May 2025 01:06:58 +0000 (0:00:15.157) 0:01:35.781 **********
2025-05-03 01:13:35.969724 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:13:35.969734 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:13:35.969745 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:13:35.969755 | orchestrator |
2025-05-03 01:13:35.969765 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-05-03 01:13:35.969775 | orchestrator |
2025-05-03 01:13:35.969785 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-05-03 01:13:35.969795 | orchestrator | Saturday 03 May 2025 01:06:58 +0000 (0:00:00.306) 0:01:36.088 **********
2025-05-03 01:13:35.969806 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 01:13:35.969816 | orchestrator |
2025-05-03 01:13:35.969826 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-05-03 01:13:35.969836 | orchestrator | Saturday 03 May 2025 01:06:59 +0000 (0:00:00.990) 0:01:37.079 **********
2025-05-03 01:13:35.969846 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:13:35.969856 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:13:35.969866 | orchestrator | changed: [testbed-node-0]
2025-05-03 01:13:35.969876 | orchestrator |
2025-05-03 01:13:35.969886 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-05-03 01:13:35.969897 | orchestrator | Saturday 03 May 2025 01:07:01 +0000 (0:00:02.405) 0:01:39.484 **********
2025-05-03 01:13:35.969907 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:13:35.969917 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:13:35.969927 | orchestrator | changed: [testbed-node-0]
2025-05-03 01:13:35.969964 | orchestrator |
2025-05-03 01:13:35.969975 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-05-03 01:13:35.969985 | orchestrator | Saturday 03 May 2025 01:07:04 +0000 (0:00:02.199) 0:01:41.684 **********
2025-05-03 01:13:35.969996 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:13:35.970006 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:13:35.970056 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:13:35.970070 | orchestrator |
2025-05-03 01:13:35.970081 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-05-03 01:13:35.970091 | orchestrator | Saturday 03 May 2025 01:07:04 +0000 (0:00:00.631) 0:01:42.316 **********
2025-05-03 01:13:35.970101 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-05-03 01:13:35.970112 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:13:35.970122 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-05-03 01:13:35.970132 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:13:35.970142 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-05-03 01:13:35.970153 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-05-03 01:13:35.970163 | orchestrator |
2025-05-03 01:13:35.970173 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-05-03 01:13:35.970183 | orchestrator | Saturday 03 May 2025 01:07:13 +0000 (0:00:08.450) 0:01:50.766 **********
2025-05-03 01:13:35.970193 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:13:35.970204 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:13:35.970214 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:13:35.970224 | orchestrator |
2025-05-03 01:13:35.970234 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-05-03 01:13:35.970260 | orchestrator | Saturday 03 May 2025 01:07:13 +0000 (0:00:00.440) 0:01:51.207 **********
2025-05-03 01:13:35.970271 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-05-03 01:13:35.970281 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:13:35.970291 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-05-03 01:13:35.970302 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:13:35.970313 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-05-03 01:13:35.970332 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:13:35.970351 | orchestrator |
2025-05-03 01:13:35.970369 | orchestrator | TASK [nova-cell : Ensuring config directories exist]
*************************** 2025-05-03 01:13:35.970387 | orchestrator | Saturday 03 May 2025 01:07:15 +0000 (0:00:01.551) 0:01:52.759 ********** 2025-05-03 01:13:35.970407 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.970425 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.970437 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:13:35.970447 | orchestrator | 2025-05-03 01:13:35.970458 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-05-03 01:13:35.970468 | orchestrator | Saturday 03 May 2025 01:07:15 +0000 (0:00:00.500) 0:01:53.259 ********** 2025-05-03 01:13:35.970478 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.970488 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.970497 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:13:35.970508 | orchestrator | 2025-05-03 01:13:35.970517 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-05-03 01:13:35.970527 | orchestrator | Saturday 03 May 2025 01:07:16 +0000 (0:00:00.927) 0:01:54.187 ********** 2025-05-03 01:13:35.970537 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.970557 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.970568 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:13:35.970578 | orchestrator | 2025-05-03 01:13:35.970588 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-05-03 01:13:35.970598 | orchestrator | Saturday 03 May 2025 01:07:18 +0000 (0:00:02.053) 0:01:56.241 ********** 2025-05-03 01:13:35.970608 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.970618 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.970628 | orchestrator | ok: [testbed-node-0] 2025-05-03 01:13:35.970639 | orchestrator | 2025-05-03 01:13:35.970649 | orchestrator | TASK [nova-cell : Get a list of existing cells] 
******************************** 2025-05-03 01:13:35.970659 | orchestrator | Saturday 03 May 2025 01:07:38 +0000 (0:00:19.361) 0:02:15.602 ********** 2025-05-03 01:13:35.970669 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.970679 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.970689 | orchestrator | ok: [testbed-node-0] 2025-05-03 01:13:35.970699 | orchestrator | 2025-05-03 01:13:35.970709 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-03 01:13:35.970719 | orchestrator | Saturday 03 May 2025 01:07:48 +0000 (0:00:10.016) 0:02:25.619 ********** 2025-05-03 01:13:35.970729 | orchestrator | ok: [testbed-node-0] 2025-05-03 01:13:35.970745 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.970756 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.970765 | orchestrator | 2025-05-03 01:13:35.970775 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-05-03 01:13:35.970786 | orchestrator | Saturday 03 May 2025 01:07:49 +0000 (0:00:01.339) 0:02:26.958 ********** 2025-05-03 01:13:35.970796 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.970807 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.970817 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:13:35.970827 | orchestrator | 2025-05-03 01:13:35.970837 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-05-03 01:13:35.970847 | orchestrator | Saturday 03 May 2025 01:07:59 +0000 (0:00:10.338) 0:02:37.297 ********** 2025-05-03 01:13:35.970857 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.970874 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.970885 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.970895 | orchestrator | 2025-05-03 01:13:35.970904 | orchestrator | TASK [Bootstrap upgrade] 
******************************************************* 2025-05-03 01:13:35.970918 | orchestrator | Saturday 03 May 2025 01:08:01 +0000 (0:00:01.679) 0:02:38.976 ********** 2025-05-03 01:13:35.970935 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.971005 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.971021 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.971032 | orchestrator | 2025-05-03 01:13:35.971045 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-05-03 01:13:35.971062 | orchestrator | 2025-05-03 01:13:35.971127 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-03 01:13:35.971160 | orchestrator | Saturday 03 May 2025 01:08:01 +0000 (0:00:00.480) 0:02:39.457 ********** 2025-05-03 01:13:35.971177 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 01:13:35.971196 | orchestrator | 2025-05-03 01:13:35.971235 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-05-03 01:13:35.971246 | orchestrator | Saturday 03 May 2025 01:08:02 +0000 (0:00:00.789) 0:02:40.247 ********** 2025-05-03 01:13:35.971256 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-05-03 01:13:35.971266 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-05-03 01:13:35.971276 | orchestrator | 2025-05-03 01:13:35.971286 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-05-03 01:13:35.971296 | orchestrator | Saturday 03 May 2025 01:08:05 +0000 (0:00:03.308) 0:02:43.555 ********** 2025-05-03 01:13:35.971306 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-05-03 01:13:35.971319 | orchestrator | skipping: [testbed-node-0] => 
(item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-05-03 01:13:35.971329 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-05-03 01:13:35.971340 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-05-03 01:13:35.971350 | orchestrator | 2025-05-03 01:13:35.971361 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-05-03 01:13:35.971377 | orchestrator | Saturday 03 May 2025 01:08:12 +0000 (0:00:06.458) 0:02:50.014 ********** 2025-05-03 01:13:35.971387 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-03 01:13:35.971398 | orchestrator | 2025-05-03 01:13:35.971408 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-05-03 01:13:35.971418 | orchestrator | Saturday 03 May 2025 01:08:15 +0000 (0:00:03.120) 0:02:53.135 ********** 2025-05-03 01:13:35.971428 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-03 01:13:35.971438 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-05-03 01:13:35.971449 | orchestrator | 2025-05-03 01:13:35.971459 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-05-03 01:13:35.971469 | orchestrator | Saturday 03 May 2025 01:08:19 +0000 (0:00:03.946) 0:02:57.082 ********** 2025-05-03 01:13:35.971479 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-03 01:13:35.971496 | orchestrator | 2025-05-03 01:13:35.971519 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-05-03 01:13:35.971536 | orchestrator | Saturday 03 May 2025 01:08:22 +0000 (0:00:03.144) 0:03:00.226 ********** 2025-05-03 01:13:35.971553 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-05-03 
01:13:35.971568 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-05-03 01:13:35.971585 | orchestrator | 2025-05-03 01:13:35.971601 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-03 01:13:35.971638 | orchestrator | Saturday 03 May 2025 01:08:30 +0000 (0:00:07.902) 0:03:08.129 ********** 2025-05-03 01:13:35.971660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-03 01:13:35.971774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-03 01:13:35.971789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.971802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.971824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-03 01:13:35.971843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.971855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.971865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.971877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.971887 | orchestrator | 2025-05-03 01:13:35.971898 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-05-03 01:13:35.971908 | orchestrator | Saturday 03 May 2025 01:08:32 +0000 (0:00:01.618) 0:03:09.747 ********** 2025-05-03 01:13:35.971918 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.971929 | orchestrator | 2025-05-03 01:13:35.971964 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-05-03 01:13:35.971984 | orchestrator | Saturday 03 May 2025 01:08:32 +0000 (0:00:00.131) 0:03:09.878 ********** 2025-05-03 01:13:35.971994 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.972004 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.972015 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.972025 | orchestrator | 2025-05-03 01:13:35.972035 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-05-03 01:13:35.972046 | orchestrator | Saturday 03 May 2025 01:08:32 +0000 (0:00:00.460) 0:03:10.339 ********** 2025-05-03 01:13:35.972056 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-03 01:13:35.972066 | orchestrator | 2025-05-03 01:13:35.972082 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-05-03 01:13:35.972092 | orchestrator | Saturday 03 May 2025 01:08:33 +0000 (0:00:00.426) 0:03:10.766 ********** 2025-05-03 01:13:35.972102 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.972112 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.972122 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.972132 | orchestrator | 2025-05-03 01:13:35.972143 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-03 
01:13:35.972152 | orchestrator | Saturday 03 May 2025 01:08:33 +0000 (0:00:00.351) 0:03:11.118 ********** 2025-05-03 01:13:35.972163 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 01:13:35.972173 | orchestrator | 2025-05-03 01:13:35.972183 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-03 01:13:35.972193 | orchestrator | Saturday 03 May 2025 01:08:34 +0000 (0:00:00.940) 0:03:12.059 ********** 2025-05-03 01:13:35.972204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-03 01:13:35.972215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-03 01:13:35.972247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-03 01:13:35.972259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.972270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.972281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.972291 | orchestrator | 2025-05-03 01:13:35.972302 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-03 01:13:35.972312 | orchestrator | Saturday 03 May 2025 01:08:37 +0000 (0:00:02.719) 0:03:14.778 ********** 2025-05-03 01:13:35.972323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-03 01:13:35.972340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 
'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.972355 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.972367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-03 01:13:35.972378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.972388 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.972398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-03 01:13:35.972415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 
'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.972426 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.972436 | orchestrator | 2025-05-03 01:13:35.972446 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-03 01:13:35.972457 | orchestrator | Saturday 03 May 2025 01:08:37 +0000 (0:00:00.788) 0:03:15.567 ********** 2025-05-03 01:13:35.972474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-03 01:13:35.972486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.972496 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.972507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-03 01:13:35.972524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.972534 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.972553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-03 01:13:35.972564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.972574 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.972585 | orchestrator | 2025-05-03 01:13:35.972595 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-05-03 01:13:35.972605 | orchestrator | Saturday 03 May 2025 01:08:39 +0000 (0:00:01.117) 0:03:16.684 ********** 2025-05-03 01:13:35.972616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-03 01:13:35.972633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-03 01:13:35.972650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-03 01:13:35.972664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.972674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.972691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.972702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.972719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.972730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.972740 | orchestrator | 2025-05-03 01:13:35.972750 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-05-03 01:13:35.972761 | orchestrator | Saturday 03 May 2025 01:08:41 +0000 (0:00:02.599) 0:03:19.284 ********** 2025-05-03 01:13:35.972771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-03 01:13:35.972819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-03 01:13:35.972837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-03 01:13:35.972848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.972859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.972882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.972893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.972904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.972920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.972931 | orchestrator | 2025-05-03 01:13:35.972988 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-05-03 01:13:35.973000 | orchestrator | Saturday 03 May 2025 01:08:47 +0000 (0:00:06.053) 0:03:25.338 ********** 2025-05-03 01:13:35.973011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-03 01:13:35.973037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.973049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.973059 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.973070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-03 01:13:35.973088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.973099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.973110 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.973134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-03 01:13:35.973145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.973155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.973166 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.973176 | orchestrator | 2025-05-03 01:13:35.973187 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-05-03 01:13:35.973197 | orchestrator | Saturday 03 May 2025 01:08:48 +0000 (0:00:00.792) 0:03:26.130 ********** 2025-05-03 01:13:35.973208 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:13:35.973218 | orchestrator | changed: [testbed-node-1] 2025-05-03 01:13:35.973228 | orchestrator | changed: [testbed-node-2] 2025-05-03 01:13:35.973238 | orchestrator | 2025-05-03 01:13:35.973248 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-05-03 01:13:35.973258 | orchestrator | Saturday 03 May 2025 01:08:50 +0000 (0:00:01.627) 0:03:27.757 ********** 2025-05-03 01:13:35.973273 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.973283 | orchestrator | skipping: [testbed-node-1] 
2025-05-03 01:13:35.973293 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.973304 | orchestrator | 2025-05-03 01:13:35.973314 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-05-03 01:13:35.973328 | orchestrator | Saturday 03 May 2025 01:08:50 +0000 (0:00:00.455) 0:03:28.213 ********** 2025-05-03 01:13:35.973339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-03 01:13:35.973366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-03 01:13:35.973377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-03 01:13:35.973393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.973411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.973422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.973440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.973451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.973462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.973472 | orchestrator | 2025-05-03 01:13:35.973482 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-03 01:13:35.973493 | orchestrator | Saturday 03 May 2025 01:08:52 +0000 (0:00:02.144) 0:03:30.357 ********** 2025-05-03 01:13:35.973503 | orchestrator | 2025-05-03 01:13:35.973513 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-03 01:13:35.973523 | orchestrator | Saturday 03 May 2025 01:08:53 +0000 (0:00:00.267) 0:03:30.624 ********** 2025-05-03 01:13:35.973533 | orchestrator | 2025-05-03 01:13:35.973544 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-03 01:13:35.973554 | orchestrator | Saturday 03 May 2025 01:08:53 +0000 (0:00:00.107) 0:03:30.732 ********** 2025-05-03 01:13:35.973564 | orchestrator | 2025-05-03 01:13:35.973579 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-05-03 01:13:35.973595 | orchestrator | Saturday 03 May 2025 01:08:53 +0000 (0:00:00.334) 0:03:31.066 ********** 2025-05-03 01:13:35.973605 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:13:35.973615 | orchestrator | changed: [testbed-node-2] 2025-05-03 01:13:35.973625 | orchestrator | changed: [testbed-node-1] 2025-05-03 01:13:35.973636 | orchestrator | 2025-05-03 01:13:35.973646 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-05-03 01:13:35.973656 | orchestrator | Saturday 03 May 2025 01:09:09 +0000 (0:00:15.549) 0:03:46.616 ********** 2025-05-03 01:13:35.973666 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:13:35.973676 | orchestrator | changed: [testbed-node-1] 2025-05-03 01:13:35.973686 | orchestrator | changed: [testbed-node-2] 2025-05-03 01:13:35.973696 | orchestrator | 2025-05-03 01:13:35.973706 | 
orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-05-03 01:13:35.973716 | orchestrator | 2025-05-03 01:13:35.973727 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-03 01:13:35.973737 | orchestrator | Saturday 03 May 2025 01:09:20 +0000 (0:00:11.780) 0:03:58.397 ********** 2025-05-03 01:13:35.973747 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-03 01:13:35.973759 | orchestrator | 2025-05-03 01:13:35.973769 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-03 01:13:35.973779 | orchestrator | Saturday 03 May 2025 01:09:22 +0000 (0:00:01.456) 0:03:59.853 ********** 2025-05-03 01:13:35.973789 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:13:35.973799 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:13:35.973810 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:13:35.973820 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.973830 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.973840 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.973850 | orchestrator | 2025-05-03 01:13:35.973860 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-05-03 01:13:35.973870 | orchestrator | Saturday 03 May 2025 01:09:23 +0000 (0:00:00.753) 0:04:00.607 ********** 2025-05-03 01:13:35.973880 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.973890 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.973900 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.973910 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-03 01:13:35.973920 | orchestrator | 2025-05-03 01:13:35.973931 | orchestrator | TASK 
[module-load : Load modules] ********************************************** 2025-05-03 01:13:35.973956 | orchestrator | Saturday 03 May 2025 01:09:24 +0000 (0:00:01.374) 0:04:01.981 ********** 2025-05-03 01:13:35.973967 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-05-03 01:13:35.973977 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-05-03 01:13:35.973987 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-05-03 01:13:35.973998 | orchestrator | 2025-05-03 01:13:35.974008 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-03 01:13:35.974044 | orchestrator | Saturday 03 May 2025 01:09:25 +0000 (0:00:00.674) 0:04:02.655 ********** 2025-05-03 01:13:35.974057 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-05-03 01:13:35.974067 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-05-03 01:13:35.974078 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-05-03 01:13:35.974088 | orchestrator | 2025-05-03 01:13:35.974098 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-03 01:13:35.974108 | orchestrator | Saturday 03 May 2025 01:09:26 +0000 (0:00:01.331) 0:04:03.987 ********** 2025-05-03 01:13:35.974118 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-05-03 01:13:35.974129 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:13:35.974139 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-05-03 01:13:35.974149 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:13:35.974171 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-05-03 01:13:35.974181 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:13:35.974191 | orchestrator | 2025-05-03 01:13:35.974201 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-05-03 
01:13:35.974212 | orchestrator | Saturday 03 May 2025 01:09:27 +0000 (0:00:00.639) 0:04:04.627 ********** 2025-05-03 01:13:35.974222 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-03 01:13:35.974232 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-03 01:13:35.974242 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.974252 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-03 01:13:35.974262 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-03 01:13:35.974272 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-03 01:13:35.974287 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-03 01:13:35.974297 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-03 01:13:35.974307 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.974317 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-03 01:13:35.974327 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-03 01:13:35.974337 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.974347 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-03 01:13:35.974357 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-03 01:13:35.974368 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-03 01:13:35.974378 | orchestrator | 2025-05-03 01:13:35.974393 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-05-03 01:13:35.974870 | orchestrator | Saturday 03 May 2025 01:09:28 +0000 (0:00:01.263) 0:04:05.890 
********** 2025-05-03 01:13:35.974910 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.974927 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.974955 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.974966 | orchestrator | changed: [testbed-node-3] 2025-05-03 01:13:35.974976 | orchestrator | changed: [testbed-node-4] 2025-05-03 01:13:35.974986 | orchestrator | changed: [testbed-node-5] 2025-05-03 01:13:35.974996 | orchestrator | 2025-05-03 01:13:35.975007 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-05-03 01:13:35.975017 | orchestrator | Saturday 03 May 2025 01:09:29 +0000 (0:00:01.150) 0:04:07.041 ********** 2025-05-03 01:13:35.975027 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.975038 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.975048 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.975058 | orchestrator | changed: [testbed-node-5] 2025-05-03 01:13:35.975068 | orchestrator | changed: [testbed-node-3] 2025-05-03 01:13:35.975078 | orchestrator | changed: [testbed-node-4] 2025-05-03 01:13:35.975088 | orchestrator | 2025-05-03 01:13:35.975098 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-03 01:13:35.975108 | orchestrator | Saturday 03 May 2025 01:09:31 +0000 (0:00:01.673) 0:04:08.715 ********** 2025-05-03 01:13:35.975120 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-03 01:13:35.975145 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-03 01:13:35.975156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': 
{'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-03 01:13:35.975212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-03 01:13:35.975232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-03 01:13:35.975244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 
'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-03 01:13:35.975268 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-03 01:13:35.975512 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-03 01:13:35.975537 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.976269 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.976306 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:13:35.976326 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.976351 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-03 01:13:35.976363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.976374 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.976385 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:13:35.976481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.976508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-03 01:13:35.976518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.976536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:13:35.976545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-03 01:13:35.976554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-03 01:13:35.976564 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-03 01:13:35.976631 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-03 01:13:35.976666 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})
2025-05-03 01:13:35.976690 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-03 01:13:35.976700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-03 01:13:35.976709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.976718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})
2025-05-03 01:13:35.976727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-03 01:13:35.976821 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.976838 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.976853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.976862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.976881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.976890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.976983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.977004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-03 01:13:35.977014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})
2025-05-03 01:13:35.977028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-03 01:13:35.977060 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.977071 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.977130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.977145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.977161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.977178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.977189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.977198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.977208 | orchestrator |
2025-05-03 01:13:35.977217 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-05-03 01:13:35.977227 | orchestrator | Saturday 03 May 2025 01:09:33 +0000 (0:00:02.672) 0:04:11.387 **********
2025-05-03 01:13:35.977236 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-03 01:13:35.977247 | orchestrator |
2025-05-03 01:13:35.977256 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-05-03 01:13:35.977265 | orchestrator | Saturday 03 May 2025 01:09:35 +0000 (0:00:01.545) 0:04:12.933 **********
2025-05-03 01:13:35.977327 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-03 01:13:35.977346 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-03 01:13:35.977369 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-03 01:13:35.977383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-03 01:13:35.977398 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-03 01:13:35.977475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-03 01:13:35.977498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-03 01:13:35.977525 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-03 01:13:35.977545 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-03 01:13:35.977554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.977567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.977577 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.977665 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.977680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.977689 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.977698 | orchestrator |
2025-05-03 01:13:35.977707 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-05-03 01:13:35.977716 | orchestrator | Saturday 03 May 2025 01:09:39 +0000 (0:00:03.789) 0:04:16.722 **********
2025-05-03 01:13:35.977725 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-03 01:13:35.977779 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-03 01:13:35.977806 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-03 01:13:35.977815 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-03 01:13:35.977824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.977834 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.977842 | orchestrator | skipping: [testbed-node-4]
2025-05-03 01:13:35.977852 | orchestrator | skipping: [testbed-node-3]
2025-05-03 01:13:35.977861 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-03 01:13:35.977933 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-03 01:13:35.977996 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.978005 | orchestrator | skipping: [testbed-node-5]
2025-05-03 01:13:35.978069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-03 01:13:35.978079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.978088 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:13:35.978097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-03 01:13:35.978114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.978123 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:13:35.978184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-03 01:13:35.978207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-03 01:13:35.978216 | orchestrator | skipping:
[testbed-node-2] 2025-05-03 01:13:35.978224 | orchestrator | 2025-05-03 01:13:35.978233 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-03 01:13:35.978241 | orchestrator | Saturday 03 May 2025 01:09:41 +0000 (0:00:01.952) 0:04:18.674 ********** 2025-05-03 01:13:35.978249 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-03 01:13:35.978257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-03 01:13:35.978277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 
'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.978292 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:13:35.978328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-03 01:13:35.978339 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-03 01:13:35.978347 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.978355 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:13:35.978364 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-03 01:13:35.978372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-03 01:13:35.978394 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.978402 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:13:35.978428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.978438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.978447 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.978455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.978463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.978477 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.978486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.978494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.978509 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.978517 | orchestrator | 2025-05-03 01:13:35.978525 | orchestrator | TASK [nova-cell : include_tasks] 
***********************************************
2025-05-03 01:13:35.978534 | orchestrator | Saturday 03 May 2025 01:09:43 +0000 (0:00:02.518) 0:04:21.192 **********
2025-05-03 01:13:35.978542 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:13:35.978550 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:13:35.978558 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:13:35.978566 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-03 01:13:35.978574 | orchestrator |
2025-05-03 01:13:35.978583 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-05-03 01:13:35.978591 | orchestrator | Saturday 03 May 2025 01:09:44 +0000 (0:00:01.288) 0:04:22.481 **********
2025-05-03 01:13:35.978616 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-03 01:13:35.978625 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-03 01:13:35.978633 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-03 01:13:35.978641 | orchestrator |
2025-05-03 01:13:35.978649 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-05-03 01:13:35.978657 | orchestrator | Saturday 03 May 2025 01:09:45 +0000 (0:00:00.946) 0:04:23.427 **********
2025-05-03 01:13:35.978665 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-03 01:13:35.978673 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-03 01:13:35.978681 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-03 01:13:35.978689 | orchestrator |
2025-05-03 01:13:35.978697 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-05-03 01:13:35.978705 | orchestrator | Saturday 03 May 2025 01:09:46 +0000 (0:00:00.944) 0:04:24.372 **********
2025-05-03 01:13:35.978712 | orchestrator | ok: [testbed-node-3]
2025-05-03 01:13:35.978721 | orchestrator | ok: [testbed-node-4]
2025-05-03 01:13:35.978729 | orchestrator | ok: [testbed-node-5]
2025-05-03 01:13:35.978737 | orchestrator |
2025-05-03 01:13:35.978744 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-05-03 01:13:35.978752 | orchestrator | Saturday 03 May 2025 01:09:47 +0000 (0:00:00.853) 0:04:25.225 **********
2025-05-03 01:13:35.978760 | orchestrator | ok: [testbed-node-3]
2025-05-03 01:13:35.978768 | orchestrator | ok: [testbed-node-4]
2025-05-03 01:13:35.978775 | orchestrator | ok: [testbed-node-5]
2025-05-03 01:13:35.978783 | orchestrator |
2025-05-03 01:13:35.978791 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-05-03 01:13:35.978803 | orchestrator | Saturday 03 May 2025 01:09:47 +0000 (0:00:00.318) 0:04:25.543 **********
2025-05-03 01:13:35.978811 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-05-03 01:13:35.978828 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-05-03 01:13:35.978837 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-05-03 01:13:35.978846 | orchestrator |
2025-05-03 01:13:35.978855 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-05-03 01:13:35.978864 | orchestrator | Saturday 03 May 2025 01:09:49 +0000 (0:00:01.350) 0:04:26.894 **********
2025-05-03 01:13:35.978873 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-05-03 01:13:35.978882 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-05-03 01:13:35.978892 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-05-03 01:13:35.978900 | orchestrator |
2025-05-03 01:13:35.978909 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-05-03 01:13:35.978918 | orchestrator | Saturday 03 May 2025 01:09:50 +0000 (0:00:01.382) 0:04:28.277 **********
2025-05-03 01:13:35.978927 | orchestrator |
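The "Extract nova key from file" and "Extract cinder key from file" tasks above pull the base64 client key out of a Ceph keyring file so it can later be handed to libvirt. A minimal sketch of that extraction, assuming the usual ceph-authtool keyring layout (the keyring contents and client name below are illustrative, not taken from this job):

```python
import re

def extract_ceph_key(keyring_text: str, client: str) -> str:
    """Return the base64 key from the [client.<name>] section of a Ceph keyring."""
    # Grab everything between the [client.<name>] header and the next section.
    section = re.search(
        rf"\[client\.{re.escape(client)}\](.*?)(?=\n\[|\Z)",
        keyring_text, re.S,
    )
    if section is None:
        raise KeyError(f"client.{client} not found in keyring")
    # Keyring entries are indented "key = <base64>" lines.
    key = re.search(r"^\s*key\s*=\s*(\S+)", section.group(1), re.M)
    if key is None:
        raise KeyError(f"no key entry for client.{client}")
    return key.group(1)

# Illustrative keyring text, not a real credential.
keyring = '[client.nova]\n\tkey = AQDExampleOnly0aBcDeFg==\n\tcaps mon = "profile rbd"\n'
print(extract_ceph_key(keyring, "nova"))  # AQDExampleOnly0aBcDeFg==
```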
changed: [testbed-node-4] => (item=nova-compute)
2025-05-03 01:13:35.978950 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-05-03 01:13:35.978960 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-05-03 01:13:35.978969 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-05-03 01:13:35.978982 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-05-03 01:13:35.978991 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-05-03 01:13:35.979000 | orchestrator |
2025-05-03 01:13:35.979009 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-05-03 01:13:35.979018 | orchestrator | Saturday 03 May 2025 01:09:55 +0000 (0:00:04.952) 0:04:33.230 **********
2025-05-03 01:13:35.979026 | orchestrator | skipping: [testbed-node-3]
2025-05-03 01:13:35.979035 | orchestrator | skipping: [testbed-node-4]
2025-05-03 01:13:35.979044 | orchestrator | skipping: [testbed-node-5]
2025-05-03 01:13:35.979053 | orchestrator |
2025-05-03 01:13:35.979061 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-05-03 01:13:35.979070 | orchestrator | Saturday 03 May 2025 01:09:56 +0000 (0:00:00.480) 0:04:33.710 **********
2025-05-03 01:13:35.979079 | orchestrator | skipping: [testbed-node-3]
2025-05-03 01:13:35.979088 | orchestrator | skipping: [testbed-node-4]
2025-05-03 01:13:35.979097 | orchestrator | skipping: [testbed-node-5]
2025-05-03 01:13:35.979105 | orchestrator |
2025-05-03 01:13:35.979114 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-05-03 01:13:35.979123 | orchestrator | Saturday 03 May 2025 01:09:56 +0000 (0:00:00.508) 0:04:34.218 **********
2025-05-03 01:13:35.979132 | orchestrator | changed: [testbed-node-3]
2025-05-03 01:13:35.979140 | orchestrator | changed: [testbed-node-4]
2025-05-03 01:13:35.979150 | orchestrator | changed: [testbed-node-5]
2025-05-03 01:13:35.979160 | orchestrator |
2025-05-03 01:13:35.979170 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-05-03 01:13:35.979179 | orchestrator | Saturday 03 May 2025 01:09:58 +0000 (0:00:01.561) 0:04:35.780 **********
2025-05-03 01:13:35.979188 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-05-03 01:13:35.979197 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-05-03 01:13:35.979209 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-05-03 01:13:35.979217 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-05-03 01:13:35.979226 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-05-03 01:13:35.979234 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-05-03 01:13:35.979247 | orchestrator |
2025-05-03 01:13:35.979255 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-05-03 01:13:35.979282 | orchestrator | Saturday 03 May 2025 01:10:01 +0000 (0:00:03.610) 0:04:39.391 **********
2025-05-03 01:13:35.979292 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-03 01:13:35.979300 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-03 01:13:35.979308 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-03 01:13:35.979316 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-03 01:13:35.979324 | orchestrator | changed: [testbed-node-3]
2025-05-03 01:13:35.979333 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-03 01:13:35.979341 | orchestrator | changed: [testbed-node-5]
2025-05-03 01:13:35.979350 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-03 01:13:35.979358 | orchestrator | changed: [testbed-node-4]
2025-05-03 01:13:35.979366 | orchestrator |
2025-05-03 01:13:35.979374 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-05-03 01:13:35.979382 | orchestrator | Saturday 03 May 2025 01:10:05 +0000 (0:00:03.404) 0:04:42.795 **********
2025-05-03 01:13:35.979390 | orchestrator | skipping: [testbed-node-3]
2025-05-03 01:13:35.979398 | orchestrator |
2025-05-03 01:13:35.979406 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-05-03 01:13:35.979414 | orchestrator | Saturday 03 May 2025 01:10:05 +0000 (0:00:00.129) 0:04:42.925 **********
2025-05-03 01:13:35.979422 | orchestrator | skipping: [testbed-node-3]
2025-05-03 01:13:35.979430 | orchestrator | skipping: [testbed-node-4]
2025-05-03 01:13:35.979438 | orchestrator | skipping: [testbed-node-5]
2025-05-03 01:13:35.979446 | orchestrator | skipping: [testbed-node-0]
2025-05-03 01:13:35.979454 | orchestrator | skipping: [testbed-node-1]
2025-05-03 01:13:35.979462 | orchestrator | skipping: [testbed-node-2]
2025-05-03 01:13:35.979470 | orchestrator |
2025-05-03 01:13:35.979478 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-05-03 01:13:35.979486 | orchestrator | Saturday 03 May 2025 01:10:06 +0000 (0:00:00.916) 0:04:43.842 **********
2025-05-03 01:13:35.979494 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-03 01:13:35.979502 | orchestrator |
2025-05-03 01:13:35.979510 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-05-03
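The "Pushing nova secret xml for libvirt" task above installs a libvirt `<secret>` definition for each Ceph client, keyed by the UUIDs shown in its loop items. A sketch of what such a definition looks like, built with the standard library and following libvirt's secret schema for Ceph; the exact template kolla-ansible renders is not reproduced here:

```python
from xml.etree import ElementTree as ET

def ceph_secret_xml(uuid: str, name: str) -> str:
    """Build a libvirt <secret> definition for a Ceph client key."""
    secret = ET.Element("secret", ephemeral="no", private="no")
    ET.SubElement(secret, "uuid").text = uuid
    # usage type "ceph" tells libvirt this secret holds an RBD client key.
    usage = ET.SubElement(secret, "usage", type="ceph")
    ET.SubElement(usage, "name").text = name
    return ET.tostring(secret, encoding="unicode")

# UUID and name as reported for the client.nova item in the log above.
print(ceph_secret_xml("5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd", "client.nova secret"))
```

The matching key value is loaded separately (with `virsh secret-define` plus `virsh secret-set-value`, or an equivalent), which broadly corresponds to the subsequent "Pushing secrets key for libvirt" task.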
01:13:35.979518 | orchestrator | Saturday 03 May 2025 01:10:06 +0000 (0:00:00.392) 0:04:44.235 ********** 2025-05-03 01:13:35.979526 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:13:35.979534 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:13:35.979542 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:13:35.979550 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.979558 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.979566 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.979574 | orchestrator | 2025-05-03 01:13:35.979582 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-05-03 01:13:35.979590 | orchestrator | Saturday 03 May 2025 01:10:07 +0000 (0:00:00.773) 0:04:45.008 ********** 2025-05-03 01:13:35.979598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-03 01:13:35.979612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 
'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-03 01:13:35.979637 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-03 01:13:35.979654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-03 01:13:35.979663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-03 01:13:35.979671 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-03 01:13:35.979680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-03 01:13:35.979693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-03 01:13:35.979724 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-03 01:13:35.979735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-03 01:13:35.979743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.979752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:13:35.979766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-03 01:13:35.979774 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-03 01:13:35.979803 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.979812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.979821 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.979829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:13:35.979837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:13:35.979857 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.979866 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-03 01:13:35.979892 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.979902 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.979910 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:13:35.979919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.979927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-03 01:13:35.979965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.979975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:13:35.979983 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 
'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-03 01:13:35.980012 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.980022 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.980030 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 
'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:13:35.980039 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.980054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.980069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.980097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.980107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.980115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.980124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.980145 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.980154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.980182 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.980192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.980200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.980223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.980232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 
'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.980240 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.980268 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 
01:13:35.980277 | orchestrator |
2025-05-03 01:13:35.980286 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2025-05-03 01:13:35.980294 | orchestrator | Saturday 03 May 2025 01:10:11 +0000 (0:00:03.981) 0:04:48.990 **********
2025-05-03 01:13:35.980302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-03 01:13:35.980316 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-03 01:13:35.980330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group':
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.980339 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.980347 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:13:35.980375 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.980385 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-03 01:13:35.980399 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-03 01:13:35.980407 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.980416 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.980428 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:13:35.980443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.980470 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-03 01:13:35.980480 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-03 01:13:35.980493 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.980502 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.980510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:13:35.980519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.980551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-03 01:13:35.980561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-03 01:13:35.980575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-03 01:13:35.980583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-03 01:13:35.980598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-03 01:13:35.980607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-03 01:13:35.980633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.980647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 
'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.980656 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.980670 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.980679 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.980705 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.980715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-03 01:13:35.980729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.980738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:13:35.980746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-03 01:13:35.980754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.980763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:13:35.980788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-03 01:13:35.980804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 
'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.980817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:13:35.980825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.980834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.980842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.980850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.980883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.980898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.980906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.980915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.980923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.980931 | orchestrator | 2025-05-03 01:13:35.980954 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-05-03 01:13:35.980962 | orchestrator | Saturday 03 May 2025 01:10:18 +0000 (0:00:07.306) 0:04:56.296 ********** 2025-05-03 01:13:35.980970 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:13:35.980979 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:13:35.980987 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:13:35.980995 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.981003 | orchestrator | skipping: [testbed-node-0] 2025-05-03 
01:13:35.981011 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.981019 | orchestrator | 2025-05-03 01:13:35.981027 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-05-03 01:13:35.981035 | orchestrator | Saturday 03 May 2025 01:10:20 +0000 (0:00:01.851) 0:04:58.147 ********** 2025-05-03 01:13:35.981048 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-03 01:13:35.981059 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-03 01:13:35.981068 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-03 01:13:35.981076 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-03 01:13:35.981084 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.981110 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-03 01:13:35.981119 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-03 01:13:35.981128 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-03 01:13:35.981136 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.981144 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-03 01:13:35.981151 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-03 01:13:35.981159 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.981167 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-03 01:13:35.981175 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-03 
01:13:35.981183 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-03 01:13:35.981191 | orchestrator | 2025-05-03 01:13:35.981199 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-05-03 01:13:35.981207 | orchestrator | Saturday 03 May 2025 01:10:25 +0000 (0:00:04.968) 0:05:03.116 ********** 2025-05-03 01:13:35.981215 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:13:35.981223 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:13:35.981231 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:13:35.981239 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.981247 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.981255 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.981263 | orchestrator | 2025-05-03 01:13:35.981270 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-05-03 01:13:35.981278 | orchestrator | Saturday 03 May 2025 01:10:26 +0000 (0:00:00.909) 0:05:04.026 ********** 2025-05-03 01:13:35.981286 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-03 01:13:35.981295 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-03 01:13:35.981303 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-03 01:13:35.981311 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-03 01:13:35.981319 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-03 01:13:35.981327 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 
'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-03 01:13:35.981335 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-03 01:13:35.981343 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-03 01:13:35.981350 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-03 01:13:35.981358 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-03 01:13:35.981371 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.981383 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-03 01:13:35.981391 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.981399 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-03 01:13:35.981407 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.981415 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-03 01:13:35.981423 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-03 01:13:35.981430 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-03 01:13:35.981438 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-03 01:13:35.981446 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-03 01:13:35.981454 | orchestrator | changed: 
[testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-03 01:13:35.981462 | orchestrator | 2025-05-03 01:13:35.981470 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-05-03 01:13:35.981478 | orchestrator | Saturday 03 May 2025 01:10:33 +0000 (0:00:06.926) 0:05:10.952 ********** 2025-05-03 01:13:35.981486 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-03 01:13:35.981494 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-03 01:13:35.981519 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-03 01:13:35.981529 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-03 01:13:35.981537 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-03 01:13:35.981545 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-03 01:13:35.981553 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-03 01:13:35.981561 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-03 01:13:35.981569 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-03 01:13:35.981577 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-03 01:13:35.981584 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-03 01:13:35.981592 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-03 01:13:35.981600 | orchestrator | skipping: [testbed-node-2] => (item={'src': 
'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-03 01:13:35.981608 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.981616 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-03 01:13:35.981624 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.981632 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-03 01:13:35.981640 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.981648 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-03 01:13:35.981656 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-03 01:13:35.981664 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-03 01:13:35.981677 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-03 01:13:35.981685 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-03 01:13:35.981693 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-03 01:13:35.981701 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-03 01:13:35.981708 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-03 01:13:35.981716 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-03 01:13:35.981724 | orchestrator | 2025-05-03 01:13:35.981732 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-05-03 01:13:35.981740 | orchestrator | Saturday 03 May 2025 01:10:42 +0000 (0:00:09.331) 0:05:20.284 ********** 2025-05-03 01:13:35.981748 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:13:35.981756 | 
orchestrator | skipping: [testbed-node-4] 2025-05-03 01:13:35.981764 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:13:35.981772 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.981780 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.981788 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.981796 | orchestrator | 2025-05-03 01:13:35.981804 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-05-03 01:13:35.981812 | orchestrator | Saturday 03 May 2025 01:10:43 +0000 (0:00:00.588) 0:05:20.872 ********** 2025-05-03 01:13:35.981820 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:13:35.981828 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:13:35.981836 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:13:35.981844 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.981852 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.981859 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.981871 | orchestrator | 2025-05-03 01:13:35.981879 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-05-03 01:13:35.981890 | orchestrator | Saturday 03 May 2025 01:10:44 +0000 (0:00:00.713) 0:05:21.586 ********** 2025-05-03 01:13:35.981899 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.981907 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.981914 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.981922 | orchestrator | changed: [testbed-node-3] 2025-05-03 01:13:35.981930 | orchestrator | changed: [testbed-node-5] 2025-05-03 01:13:35.981953 | orchestrator | changed: [testbed-node-4] 2025-05-03 01:13:35.981961 | orchestrator | 2025-05-03 01:13:35.981969 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-05-03 01:13:35.981977 | orchestrator | Saturday 03 May 2025 01:10:47 +0000 
(0:00:02.989) 0:05:24.575 ********** 2025-05-03 01:13:35.982012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-03 01:13:35.982043 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-03 01:13:35.982058 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.982066 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.982075 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:13:35.982084 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.982099 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.982127 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.982144 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:13:35.982153 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-03 01:13:35.982161 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-03 01:13:35.982176 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.982185 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 
'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.982193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:13:35.982219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.982234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-03 01:13:35.982243 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.982251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.982265 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-03 01:13:35.982274 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:13:35.982282 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.982295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.982308 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:13:35.982317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.982325 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.982334 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.982342 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:13:35.982356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-03 01:13:35.982369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-03 01:13:35.982383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.982391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.982403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:13:35.982412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.982426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-03 01:13:35.982439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.982452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-03 01:13:35.982461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.982469 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.982477 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.982486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.982494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:13:35.982502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.982524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.982533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.982542 | orchestrator | skipping: [testbed-node-1] 
2025-05-03 01:13:35.982550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-03 01:13:35.982559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-03 01:13:35.982572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.982581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.982597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:13:35.982606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 
01:13:35.982614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.982623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.982631 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.982639 | orchestrator | 2025-05-03 01:13:35.982647 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-05-03 01:13:35.982655 | orchestrator | Saturday 03 May 2025 01:10:49 +0000 (0:00:02.066) 0:05:26.641 ********** 2025-05-03 01:13:35.982664 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-03 01:13:35.982672 | orchestrator | skipping: 
[testbed-node-3] => (item=nova-compute-ironic)  2025-05-03 01:13:35.982680 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:13:35.982688 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-03 01:13:35.982696 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-03 01:13:35.982705 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:13:35.982713 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-03 01:13:35.982725 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-05-03 01:13:35.982733 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:13:35.982741 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-03 01:13:35.982750 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-03 01:13:35.982757 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.982766 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-03 01:13:35.982774 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-03 01:13:35.982782 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.982790 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-03 01:13:35.982798 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-03 01:13:35.982806 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.982813 | orchestrator | 2025-05-03 01:13:35.982822 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-05-03 01:13:35.982829 | orchestrator | Saturday 03 May 2025 01:10:49 +0000 (0:00:00.799) 0:05:27.441 ********** 2025-05-03 01:13:35.982848 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-03 01:13:35.982858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-03 01:13:35.982866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-03 01:13:35.982875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-03 01:13:35.982888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-03 01:13:35.982900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 
'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-03 01:13:35.982914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-03 01:13:35.982922 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-03 01:13:35.982931 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-03 01:13:35.982988 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-03 01:13:35.982998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.983010 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.983018 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:13:35.983034 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.983042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-03 01:13:35.983051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.983064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:13:35.983072 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-03 01:13:35.983083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.983098 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.983107 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:13:35.983115 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.983128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-03 01:13:35.983137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.983145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:13:35.983154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-03 01:13:35.983165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 
'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.983174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:13:35.983183 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-03 01:13:35.983195 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.983210 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-03 01:13:35.983222 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-03 01:13:35.983231 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}})  2025-05-03 01:13:35.983244 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.983252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.983271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.983280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.983288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.983298 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.983305 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.983318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.983330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.983342 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-03 01:13:35.983350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.983360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.983368 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.983381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.983393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-03 01:13:35.983400 | orchestrator | 2025-05-03 01:13:35.983407 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-03 01:13:35.983414 | orchestrator | Saturday 03 May 2025 01:10:53 +0000 (0:00:03.375) 0:05:30.817 ********** 2025-05-03 01:13:35.983421 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:13:35.983428 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:13:35.983435 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:13:35.983442 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.983449 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.983456 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.983463 | orchestrator | 2025-05-03 01:13:35.983470 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-03 01:13:35.983477 | orchestrator | Saturday 03 May 2025 01:10:53 +0000 (0:00:00.740) 0:05:31.557 ********** 2025-05-03 01:13:35.983484 | orchestrator | 2025-05-03 01:13:35.983491 | 
orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-03 01:13:35.983498 | orchestrator | Saturday 03 May 2025 01:10:54 +0000 (0:00:00.298) 0:05:31.855 ********** 2025-05-03 01:13:35.983505 | orchestrator | 2025-05-03 01:13:35.983512 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-03 01:13:35.983519 | orchestrator | Saturday 03 May 2025 01:10:54 +0000 (0:00:00.106) 0:05:31.962 ********** 2025-05-03 01:13:35.983526 | orchestrator | 2025-05-03 01:13:35.983533 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-03 01:13:35.983540 | orchestrator | Saturday 03 May 2025 01:10:54 +0000 (0:00:00.107) 0:05:32.070 ********** 2025-05-03 01:13:35.983547 | orchestrator | 2025-05-03 01:13:35.983554 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-03 01:13:35.983561 | orchestrator | Saturday 03 May 2025 01:10:54 +0000 (0:00:00.307) 0:05:32.378 ********** 2025-05-03 01:13:35.983568 | orchestrator | 2025-05-03 01:13:35.983575 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-03 01:13:35.983582 | orchestrator | Saturday 03 May 2025 01:10:54 +0000 (0:00:00.106) 0:05:32.484 ********** 2025-05-03 01:13:35.983589 | orchestrator | 2025-05-03 01:13:35.983596 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-05-03 01:13:35.983602 | orchestrator | Saturday 03 May 2025 01:10:55 +0000 (0:00:00.318) 0:05:32.803 ********** 2025-05-03 01:13:35.983609 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:13:35.983616 | orchestrator | changed: [testbed-node-2] 2025-05-03 01:13:35.983623 | orchestrator | changed: [testbed-node-1] 2025-05-03 01:13:35.983630 | orchestrator | 2025-05-03 01:13:35.983637 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy 
container] **************** 2025-05-03 01:13:35.983644 | orchestrator | Saturday 03 May 2025 01:11:07 +0000 (0:00:12.607) 0:05:45.410 ********** 2025-05-03 01:13:35.983651 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:13:35.983658 | orchestrator | changed: [testbed-node-1] 2025-05-03 01:13:35.983669 | orchestrator | changed: [testbed-node-2] 2025-05-03 01:13:35.983676 | orchestrator | 2025-05-03 01:13:35.983683 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-05-03 01:13:35.983690 | orchestrator | Saturday 03 May 2025 01:11:23 +0000 (0:00:15.586) 0:06:00.996 ********** 2025-05-03 01:13:35.983701 | orchestrator | changed: [testbed-node-4] 2025-05-03 01:13:35.983708 | orchestrator | changed: [testbed-node-3] 2025-05-03 01:13:35.983719 | orchestrator | changed: [testbed-node-5] 2025-05-03 01:13:35.983731 | orchestrator | 2025-05-03 01:13:35.983739 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-05-03 01:13:35.983746 | orchestrator | Saturday 03 May 2025 01:11:40 +0000 (0:00:16.928) 0:06:17.925 ********** 2025-05-03 01:13:35.983753 | orchestrator | changed: [testbed-node-4] 2025-05-03 01:13:35.983760 | orchestrator | changed: [testbed-node-3] 2025-05-03 01:13:35.983767 | orchestrator | changed: [testbed-node-5] 2025-05-03 01:13:35.983774 | orchestrator | 2025-05-03 01:13:35.983781 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-05-03 01:13:35.983788 | orchestrator | Saturday 03 May 2025 01:12:06 +0000 (0:00:26.583) 0:06:44.508 ********** 2025-05-03 01:13:35.983795 | orchestrator | changed: [testbed-node-3] 2025-05-03 01:13:35.983802 | orchestrator | changed: [testbed-node-4] 2025-05-03 01:13:35.983809 | orchestrator | changed: [testbed-node-5] 2025-05-03 01:13:35.983816 | orchestrator | 2025-05-03 01:13:35.983823 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] 
************************* 2025-05-03 01:13:35.983833 | orchestrator | Saturday 03 May 2025 01:12:07 +0000 (0:00:00.885) 0:06:45.394 ********** 2025-05-03 01:13:35.983840 | orchestrator | changed: [testbed-node-3] 2025-05-03 01:13:35.983847 | orchestrator | changed: [testbed-node-4] 2025-05-03 01:13:35.983854 | orchestrator | changed: [testbed-node-5] 2025-05-03 01:13:35.983861 | orchestrator | 2025-05-03 01:13:35.983868 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-05-03 01:13:35.983875 | orchestrator | Saturday 03 May 2025 01:12:08 +0000 (0:00:00.762) 0:06:46.157 ********** 2025-05-03 01:13:35.983882 | orchestrator | changed: [testbed-node-3] 2025-05-03 01:13:35.983889 | orchestrator | changed: [testbed-node-5] 2025-05-03 01:13:35.983896 | orchestrator | changed: [testbed-node-4] 2025-05-03 01:13:35.983903 | orchestrator | 2025-05-03 01:13:35.983910 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-05-03 01:13:35.983917 | orchestrator | Saturday 03 May 2025 01:12:29 +0000 (0:00:20.574) 0:07:06.731 ********** 2025-05-03 01:13:35.983924 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:13:35.983931 | orchestrator | 2025-05-03 01:13:35.983950 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-05-03 01:13:35.983958 | orchestrator | Saturday 03 May 2025 01:12:29 +0000 (0:00:00.136) 0:07:06.867 ********** 2025-05-03 01:13:35.983965 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.983972 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:13:35.983979 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:13:35.983986 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.983993 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.984003 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to 
register themselves (20 retries left). 2025-05-03 01:13:35.984011 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-03 01:13:35.984018 | orchestrator | 2025-05-03 01:13:35.984025 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-05-03 01:13:35.984032 | orchestrator | Saturday 03 May 2025 01:12:51 +0000 (0:00:22.477) 0:07:29.344 ********** 2025-05-03 01:13:35.984038 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:13:35.984045 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:13:35.984052 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:13:35.984059 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.984066 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.984073 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.984085 | orchestrator | 2025-05-03 01:13:35.984092 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-05-03 01:13:35.984099 | orchestrator | Saturday 03 May 2025 01:13:01 +0000 (0:00:09.332) 0:07:38.677 ********** 2025-05-03 01:13:35.984106 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:13:35.984113 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.984120 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:13:35.984127 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.984134 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.984141 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2025-05-03 01:13:35.984148 | orchestrator | 2025-05-03 01:13:35.984155 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-03 01:13:35.984162 | orchestrator | Saturday 03 May 2025 01:13:04 +0000 (0:00:03.079) 0:07:41.757 ********** 2025-05-03 01:13:35.984168 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 
2025-05-03 01:13:35.984176 | orchestrator | 2025-05-03 01:13:35.984182 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-03 01:13:35.984189 | orchestrator | Saturday 03 May 2025 01:13:14 +0000 (0:00:10.049) 0:07:51.807 ********** 2025-05-03 01:13:35.984196 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-03 01:13:35.984203 | orchestrator | 2025-05-03 01:13:35.984210 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-05-03 01:13:35.984217 | orchestrator | Saturday 03 May 2025 01:13:15 +0000 (0:00:01.144) 0:07:52.951 ********** 2025-05-03 01:13:35.984224 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:13:35.984231 | orchestrator | 2025-05-03 01:13:35.984238 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-05-03 01:13:35.984245 | orchestrator | Saturday 03 May 2025 01:13:16 +0000 (0:00:01.265) 0:07:54.217 ********** 2025-05-03 01:13:35.984252 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-03 01:13:35.984259 | orchestrator | 2025-05-03 01:13:35.984266 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-05-03 01:13:35.984273 | orchestrator | Saturday 03 May 2025 01:13:25 +0000 (0:00:08.865) 0:08:03.082 ********** 2025-05-03 01:13:35.984280 | orchestrator | ok: [testbed-node-3] 2025-05-03 01:13:35.984287 | orchestrator | ok: [testbed-node-4] 2025-05-03 01:13:35.984294 | orchestrator | ok: [testbed-node-5] 2025-05-03 01:13:35.984300 | orchestrator | ok: [testbed-node-0] 2025-05-03 01:13:35.984307 | orchestrator | ok: [testbed-node-1] 2025-05-03 01:13:35.984314 | orchestrator | ok: [testbed-node-2] 2025-05-03 01:13:35.984321 | orchestrator | 2025-05-03 01:13:35.984331 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-05-03 01:13:35.984339 | 
orchestrator | 2025-05-03 01:13:35.984346 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-05-03 01:13:35.984353 | orchestrator | Saturday 03 May 2025 01:13:27 +0000 (0:00:02.234) 0:08:05.317 ********** 2025-05-03 01:13:35.984360 | orchestrator | changed: [testbed-node-0] 2025-05-03 01:13:35.984367 | orchestrator | changed: [testbed-node-1] 2025-05-03 01:13:35.984374 | orchestrator | changed: [testbed-node-2] 2025-05-03 01:13:35.984381 | orchestrator | 2025-05-03 01:13:35.984388 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-05-03 01:13:35.984395 | orchestrator | 2025-05-03 01:13:35.984401 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-05-03 01:13:35.984408 | orchestrator | Saturday 03 May 2025 01:13:28 +0000 (0:00:01.041) 0:08:06.358 ********** 2025-05-03 01:13:35.984415 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.984422 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.984429 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.984436 | orchestrator | 2025-05-03 01:13:35.984443 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-05-03 01:13:35.984450 | orchestrator | 2025-05-03 01:13:35.984458 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-05-03 01:13:35.984469 | orchestrator | Saturday 03 May 2025 01:13:29 +0000 (0:00:00.794) 0:08:07.153 ********** 2025-05-03 01:13:35.984476 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-05-03 01:13:35.984483 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-03 01:13:35.984490 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-03 01:13:35.984497 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-05-03 
01:13:35.984504 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-05-03 01:13:35.984511 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-05-03 01:13:35.984518 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-05-03 01:13:35.984525 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-03 01:13:35.984532 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-03 01:13:35.984539 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-05-03 01:13:35.984545 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-05-03 01:13:35.984552 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-05-03 01:13:35.984559 | orchestrator | skipping: [testbed-node-3] 2025-05-03 01:13:35.984566 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-05-03 01:13:35.984573 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-03 01:13:35.984580 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-05-03 01:13:35.984590 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-05-03 01:13:35.984597 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-05-03 01:13:35.984604 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-05-03 01:13:35.984611 | orchestrator | skipping: [testbed-node-4] 2025-05-03 01:13:35.984618 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-05-03 01:13:35.984625 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-03 01:13:35.984632 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-03 01:13:35.984639 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-05-03 01:13:35.984646 | orchestrator | skipping: [testbed-node-0] => 
(item=nova-serialproxy)  2025-05-03 01:13:35.984653 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-05-03 01:13:35.984660 | orchestrator | skipping: [testbed-node-5] 2025-05-03 01:13:35.984667 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-05-03 01:13:35.984674 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-03 01:13:35.984681 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-03 01:13:35.984687 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-05-03 01:13:35.984694 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.984702 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-05-03 01:13:35.984708 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-05-03 01:13:35.984715 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.984722 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-05-03 01:13:35.984729 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-03 01:13:35.984736 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-03 01:13:35.984743 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-05-03 01:13:35.984750 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-05-03 01:13:35.984757 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-05-03 01:13:35.984764 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:35.984771 | orchestrator | 2025-05-03 01:13:35.984778 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-05-03 01:13:35.984789 | orchestrator | 2025-05-03 01:13:35.984800 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-05-03 01:13:35.984807 | orchestrator | Saturday 03 May 
2025 01:13:31 +0000 (0:00:01.507) 0:08:08.660 ********** 2025-05-03 01:13:35.984815 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-05-03 01:13:35.984822 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-05-03 01:13:35.984828 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:35.984835 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-05-03 01:13:35.984842 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-05-03 01:13:35.984849 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:35.984859 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-05-03 01:13:39.022140 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-05-03 01:13:39.022301 | orchestrator | skipping: [testbed-node-2] 2025-05-03 01:13:39.022322 | orchestrator | 2025-05-03 01:13:39.022339 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-05-03 01:13:39.022355 | orchestrator | 2025-05-03 01:13:39.022369 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-05-03 01:13:39.022384 | orchestrator | Saturday 03 May 2025 01:13:31 +0000 (0:00:00.639) 0:08:09.300 ********** 2025-05-03 01:13:39.022398 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:39.022412 | orchestrator | 2025-05-03 01:13:39.022426 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-05-03 01:13:39.022440 | orchestrator | 2025-05-03 01:13:39.022454 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-05-03 01:13:39.022470 | orchestrator | Saturday 03 May 2025 01:13:32 +0000 (0:00:00.966) 0:08:10.266 ********** 2025-05-03 01:13:39.022484 | orchestrator | skipping: [testbed-node-0] 2025-05-03 01:13:39.022498 | orchestrator | skipping: [testbed-node-1] 2025-05-03 01:13:39.022549 | orchestrator 
| skipping: [testbed-node-2] 2025-05-03 01:13:39.022567 | orchestrator | 2025-05-03 01:13:39.022583 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-03 01:13:39.022599 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-03 01:13:39.022618 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-05-03 01:13:39.022636 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-05-03 01:13:39.022652 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-05-03 01:13:39.022668 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-05-03 01:13:39.022685 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-05-03 01:13:39.022701 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-05-03 01:13:39.022716 | orchestrator | 2025-05-03 01:13:39.022732 | orchestrator | 2025-05-03 01:13:39.022749 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-03 01:13:39.022764 | orchestrator | Saturday 03 May 2025 01:13:33 +0000 (0:00:00.536) 0:08:10.803 ********** 2025-05-03 01:13:39.022780 | orchestrator | =============================================================================== 2025-05-03 01:13:39.022796 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 29.30s 2025-05-03 01:13:39.022812 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 26.58s 2025-05-03 01:13:39.022860 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.48s 2025-05-03 01:13:39.022878 | 
orchestrator | nova-cell : Restart nova-compute container ----------------------------- 20.57s 2025-05-03 01:13:39.022895 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 19.36s 2025-05-03 01:13:39.022910 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 16.93s 2025-05-03 01:13:39.022924 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 15.59s 2025-05-03 01:13:39.022976 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 15.55s 2025-05-03 01:13:39.022992 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 15.16s 2025-05-03 01:13:39.023007 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.61s 2025-05-03 01:13:39.023021 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 12.24s 2025-05-03 01:13:39.023036 | orchestrator | nova : Restart nova-api container -------------------------------------- 11.78s 2025-05-03 01:13:39.023050 | orchestrator | nova-cell : Create cell ------------------------------------------------ 10.34s 2025-05-03 01:13:39.023065 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.05s 2025-05-03 01:13:39.023080 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.02s 2025-05-03 01:13:39.023094 | orchestrator | nova-cell : Get a list of existing cells -------------------------------- 9.83s 2025-05-03 01:13:39.023109 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.33s 2025-05-03 01:13:39.023123 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 9.33s 2025-05-03 01:13:39.023138 | orchestrator | nova-cell : Discover nova hosts ----------------------------------------- 8.87s 2025-05-03 01:13:39.023153 | orchestrator | 
service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.45s 2025-05-03 01:13:39.023168 | orchestrator | 2025-05-03 01:13:35 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:13:39.023184 | orchestrator | 2025-05-03 01:13:35 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:13:39.023273 | orchestrator | 2025-05-03 01:13:39 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:13:42.067472 | orchestrator | 2025-05-03 01:13:39 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:13:42.067644 | orchestrator | 2025-05-03 01:13:42 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:13:45.110903 | orchestrator | 2025-05-03 01:13:42 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:13:45.111084 | orchestrator | 2025-05-03 01:13:45 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:13:48.163537 | orchestrator | 2025-05-03 01:13:45 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:13:48.163717 | orchestrator | 2025-05-03 01:13:48 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:13:51.212270 | orchestrator | 2025-05-03 01:13:48 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:13:51.212416 | orchestrator | 2025-05-03 01:13:51 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:13:51.212559 | orchestrator | 2025-05-03 01:13:51 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:13:54.265545 | orchestrator | 2025-05-03 01:13:54 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:13:57.310254 | orchestrator | 2025-05-03 01:13:54 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:13:57.310390 | orchestrator | 2025-05-03 01:13:57 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:14:00.359775 | orchestrator | 
2025-05-03 01:13:57 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:14:00.359910 | orchestrator | 2025-05-03 01:14:00 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:14:03.406404 | orchestrator | 2025-05-03 01:14:00 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:14:03.406549 | orchestrator | 2025-05-03 01:14:03 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:14:06.455088 | orchestrator | 2025-05-03 01:14:03 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:14:06.455227 | orchestrator | 2025-05-03 01:14:06 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:14:09.507455 | orchestrator | 2025-05-03 01:14:06 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:14:09.507598 | orchestrator | 2025-05-03 01:14:09 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:14:12.553858 | orchestrator | 2025-05-03 01:14:09 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:14:12.554098 | orchestrator | 2025-05-03 01:14:12 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:14:15.593155 | orchestrator | 2025-05-03 01:14:12 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:14:15.593307 | orchestrator | 2025-05-03 01:14:15 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:14:18.646183 | orchestrator | 2025-05-03 01:14:15 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:14:18.646321 | orchestrator | 2025-05-03 01:14:18 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:14:21.704532 | orchestrator | 2025-05-03 01:14:18 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:14:21.704756 | orchestrator | 2025-05-03 01:14:21 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:14:24.748610 | orchestrator | 2025-05-03 
01:14:21 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:14:24.748744 | orchestrator | 2025-05-03 01:14:24 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
[... identical polling output repeated every ~3 seconds; task remained in state STARTED from 01:14:24 to 01:22:32 ...]
2025-05-03 01:22:32.935142 | orchestrator | 2025-05-03 01:22:32 | INFO  | Task 
48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:22:35.980632 | orchestrator | 2025-05-03 01:22:32 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:22:35.980780 | orchestrator | 2025-05-03 01:22:35 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:22:39.036212 | orchestrator | 2025-05-03 01:22:35 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:22:39.036354 | orchestrator | 2025-05-03 01:22:39 | INFO  | Task fb198a9b-760c-4a9d-8a87-03f6b83a4286 is in state STARTED 2025-05-03 01:22:39.037754 | orchestrator | 2025-05-03 01:22:39 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:22:42.093109 | orchestrator | 2025-05-03 01:22:39 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:22:42.093291 | orchestrator | 2025-05-03 01:22:42 | INFO  | Task fb198a9b-760c-4a9d-8a87-03f6b83a4286 is in state STARTED 2025-05-03 01:22:42.094514 | orchestrator | 2025-05-03 01:22:42 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:22:42.094734 | orchestrator | 2025-05-03 01:22:42 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:22:45.156169 | orchestrator | 2025-05-03 01:22:45 | INFO  | Task fb198a9b-760c-4a9d-8a87-03f6b83a4286 is in state STARTED 2025-05-03 01:22:45.157240 | orchestrator | 2025-05-03 01:22:45 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:22:48.208268 | orchestrator | 2025-05-03 01:22:45 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:22:48.208490 | orchestrator | 2025-05-03 01:22:48 | INFO  | Task fb198a9b-760c-4a9d-8a87-03f6b83a4286 is in state STARTED 2025-05-03 01:22:48.209877 | orchestrator | 2025-05-03 01:22:48 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:22:51.260153 | orchestrator | 2025-05-03 01:22:48 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:22:51.260352 | orchestrator | 
2025-05-03 01:22:51 | INFO  | Task fb198a9b-760c-4a9d-8a87-03f6b83a4286 is in state SUCCESS 2025-05-03 01:22:51.261803 | orchestrator | 2025-05-03 01:22:51 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:22:54.312136 | orchestrator | 2025-05-03 01:22:51 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:22:54.312317 | orchestrator | 2025-05-03 01:22:54 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:22:57.364998 | orchestrator | 2025-05-03 01:22:54 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:22:57.365136 | orchestrator | 2025-05-03 01:22:57 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:23:00.412144 | orchestrator | 2025-05-03 01:22:57 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:23:00.412289 | orchestrator | 2025-05-03 01:23:00 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:23:03.462432 | orchestrator | 2025-05-03 01:23:00 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:23:03.462573 | orchestrator | 2025-05-03 01:23:03 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:23:06.508206 | orchestrator | 2025-05-03 01:23:03 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:23:06.508334 | orchestrator | 2025-05-03 01:23:06 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:23:09.553065 | orchestrator | 2025-05-03 01:23:06 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:23:09.553207 | orchestrator | 2025-05-03 01:23:09 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:23:12.603789 | orchestrator | 2025-05-03 01:23:09 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:23:12.603993 | orchestrator | 2025-05-03 01:23:12 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:23:15.658289 | 
orchestrator | 2025-05-03 01:23:12 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:23:15.658424 | orchestrator | 2025-05-03 01:23:15 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:23:18.714968 | orchestrator | 2025-05-03 01:23:15 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:23:18.715142 | orchestrator | 2025-05-03 01:23:18 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:23:21.764087 | orchestrator | 2025-05-03 01:23:18 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:23:21.764224 | orchestrator | 2025-05-03 01:23:21 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:23:24.813858 | orchestrator | 2025-05-03 01:23:21 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:23:24.814087 | orchestrator | 2025-05-03 01:23:24 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:23:27.859340 | orchestrator | 2025-05-03 01:23:24 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:23:27.859477 | orchestrator | 2025-05-03 01:23:27 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:23:30.908646 | orchestrator | 2025-05-03 01:23:27 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:23:30.908800 | orchestrator | 2025-05-03 01:23:30 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:23:33.953640 | orchestrator | 2025-05-03 01:23:30 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:23:33.953780 | orchestrator | 2025-05-03 01:23:33 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:23:37.004432 | orchestrator | 2025-05-03 01:23:33 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:23:37.004574 | orchestrator | 2025-05-03 01:23:37 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:23:40.052417 | orchestrator | 
2025-05-03 01:23:37 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:23:40.052563 | orchestrator | 2025-05-03 01:23:40 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:23:43.106315 | orchestrator | 2025-05-03 01:23:40 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:23:43.106570 | orchestrator | 2025-05-03 01:23:43 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:23:46.157121 | orchestrator | 2025-05-03 01:23:43 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:23:46.157286 | orchestrator | 2025-05-03 01:23:46 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:23:49.204551 | orchestrator | 2025-05-03 01:23:46 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:23:49.204741 | orchestrator | 2025-05-03 01:23:49 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:23:52.261227 | orchestrator | 2025-05-03 01:23:49 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:23:52.261363 | orchestrator | 2025-05-03 01:23:52 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:23:55.318144 | orchestrator | 2025-05-03 01:23:52 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:23:55.318247 | orchestrator | 2025-05-03 01:23:55 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:23:58.372800 | orchestrator | 2025-05-03 01:23:55 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:23:58.372989 | orchestrator | 2025-05-03 01:23:58 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:24:01.431176 | orchestrator | 2025-05-03 01:23:58 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:24:01.431323 | orchestrator | 2025-05-03 01:24:01 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:24:04.480662 | orchestrator | 2025-05-03 
01:24:01 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:24:04.480808 | orchestrator | 2025-05-03 01:24:04 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:24:07.529815 | orchestrator | 2025-05-03 01:24:04 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:24:07.530006 | orchestrator | 2025-05-03 01:24:07 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:24:10.581652 | orchestrator | 2025-05-03 01:24:07 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:24:10.581785 | orchestrator | 2025-05-03 01:24:10 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:24:13.630126 | orchestrator | 2025-05-03 01:24:10 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:24:13.630284 | orchestrator | 2025-05-03 01:24:13 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:24:16.673663 | orchestrator | 2025-05-03 01:24:13 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:24:16.673801 | orchestrator | 2025-05-03 01:24:16 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:24:19.726512 | orchestrator | 2025-05-03 01:24:16 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:24:19.726651 | orchestrator | 2025-05-03 01:24:19 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:24:22.775435 | orchestrator | 2025-05-03 01:24:19 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:24:22.775572 | orchestrator | 2025-05-03 01:24:22 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:24:25.823400 | orchestrator | 2025-05-03 01:24:22 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:24:25.823541 | orchestrator | 2025-05-03 01:24:25 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:24:28.873869 | orchestrator | 2025-05-03 01:24:25 | INFO 
 | Wait 1 second(s) until the next check 2025-05-03 01:24:28.874094 | orchestrator | 2025-05-03 01:24:28 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:24:31.922538 | orchestrator | 2025-05-03 01:24:28 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:24:31.922675 | orchestrator | 2025-05-03 01:24:31 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:24:34.988496 | orchestrator | 2025-05-03 01:24:31 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:24:34.988661 | orchestrator | 2025-05-03 01:24:34 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:24:38.037980 | orchestrator | 2025-05-03 01:24:34 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:24:38.038204 | orchestrator | 2025-05-03 01:24:38 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:24:41.088701 | orchestrator | 2025-05-03 01:24:38 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:24:41.088842 | orchestrator | 2025-05-03 01:24:41 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:24:44.143976 | orchestrator | 2025-05-03 01:24:41 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:24:44.144120 | orchestrator | 2025-05-03 01:24:44 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:24:47.197015 | orchestrator | 2025-05-03 01:24:44 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:24:47.197107 | orchestrator | 2025-05-03 01:24:47 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:24:50.247885 | orchestrator | 2025-05-03 01:24:47 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:24:50.248109 | orchestrator | 2025-05-03 01:24:50 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:24:53.301442 | orchestrator | 2025-05-03 01:24:50 | INFO  | Wait 1 
second(s) until the next check 2025-05-03 01:24:53.301583 | orchestrator | 2025-05-03 01:24:53 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:24:56.354165 | orchestrator | 2025-05-03 01:24:53 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:24:56.354312 | orchestrator | 2025-05-03 01:24:56 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:24:59.412740 | orchestrator | 2025-05-03 01:24:56 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:24:59.412903 | orchestrator | 2025-05-03 01:24:59 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:25:02.457465 | orchestrator | 2025-05-03 01:24:59 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:25:02.457619 | orchestrator | 2025-05-03 01:25:02 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:25:05.511015 | orchestrator | 2025-05-03 01:25:02 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:25:05.511119 | orchestrator | 2025-05-03 01:25:05 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:25:08.564326 | orchestrator | 2025-05-03 01:25:05 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:25:08.564469 | orchestrator | 2025-05-03 01:25:08 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:25:11.619160 | orchestrator | 2025-05-03 01:25:08 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:25:11.619306 | orchestrator | 2025-05-03 01:25:11 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:25:14.666269 | orchestrator | 2025-05-03 01:25:11 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:25:14.666361 | orchestrator | 2025-05-03 01:25:14 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:25:17.715168 | orchestrator | 2025-05-03 01:25:14 | INFO  | Wait 1 second(s) until 
the next check 2025-05-03 01:25:17.715314 | orchestrator | 2025-05-03 01:25:17 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:25:20.762406 | orchestrator | 2025-05-03 01:25:17 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:25:20.762543 | orchestrator | 2025-05-03 01:25:20 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:25:23.810454 | orchestrator | 2025-05-03 01:25:20 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:25:23.810603 | orchestrator | 2025-05-03 01:25:23 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:25:26.857083 | orchestrator | 2025-05-03 01:25:23 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:25:26.857221 | orchestrator | 2025-05-03 01:25:26 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:25:29.909430 | orchestrator | 2025-05-03 01:25:26 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:25:29.909573 | orchestrator | 2025-05-03 01:25:29 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:25:32.957397 | orchestrator | 2025-05-03 01:25:29 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:25:32.957535 | orchestrator | 2025-05-03 01:25:32 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:25:36.021341 | orchestrator | 2025-05-03 01:25:32 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:25:36.021481 | orchestrator | 2025-05-03 01:25:36 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:25:39.073996 | orchestrator | 2025-05-03 01:25:36 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:25:39.074169 | orchestrator | 2025-05-03 01:25:39 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:25:42.116764 | orchestrator | 2025-05-03 01:25:39 | INFO  | Wait 1 second(s) until the next check 
2025-05-03 01:25:42.116903 | orchestrator | 2025-05-03 01:25:42 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:25:45.168530 | orchestrator | 2025-05-03 01:25:42 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:25:45.168667 | orchestrator | 2025-05-03 01:25:45 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:25:48.211548 | orchestrator | 2025-05-03 01:25:45 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:25:48.211687 | orchestrator | 2025-05-03 01:25:48 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:25:51.262241 | orchestrator | 2025-05-03 01:25:48 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:25:51.262423 | orchestrator | 2025-05-03 01:25:51 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:25:54.311019 | orchestrator | 2025-05-03 01:25:51 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:25:54.311170 | orchestrator | 2025-05-03 01:25:54 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:25:57.358682 | orchestrator | 2025-05-03 01:25:54 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:25:57.358821 | orchestrator | 2025-05-03 01:25:57 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:26:00.409037 | orchestrator | 2025-05-03 01:25:57 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:26:00.409163 | orchestrator | 2025-05-03 01:26:00 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:26:03.456899 | orchestrator | 2025-05-03 01:26:00 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:26:03.457093 | orchestrator | 2025-05-03 01:26:03 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:26:06.502093 | orchestrator | 2025-05-03 01:26:03 | INFO  | Wait 1 second(s) until the next check 2025-05-03 
01:26:06.502239 | orchestrator | 2025-05-03 01:26:06 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:26:09.552491 | orchestrator | 2025-05-03 01:26:06 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:26:09.552624 | orchestrator | 2025-05-03 01:26:09 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:26:12.598148 | orchestrator | 2025-05-03 01:26:09 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:26:12.598347 | orchestrator | 2025-05-03 01:26:12 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:26:15.653988 | orchestrator | 2025-05-03 01:26:12 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:26:15.654192 | orchestrator | 2025-05-03 01:26:15 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:26:18.710330 | orchestrator | 2025-05-03 01:26:15 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:26:18.710476 | orchestrator | 2025-05-03 01:26:18 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:26:21.758325 | orchestrator | 2025-05-03 01:26:18 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:26:21.758466 | orchestrator | 2025-05-03 01:26:21 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:26:24.810130 | orchestrator | 2025-05-03 01:26:21 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:26:24.810278 | orchestrator | 2025-05-03 01:26:24 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:26:27.859220 | orchestrator | 2025-05-03 01:26:24 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:26:27.859359 | orchestrator | 2025-05-03 01:26:27 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:26:30.920207 | orchestrator | 2025-05-03 01:26:27 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:26:30.920352 
| orchestrator | 2025-05-03 01:26:30 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:26:33.975077 | orchestrator | 2025-05-03 01:26:30 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:26:33.975215 | orchestrator | 2025-05-03 01:26:33 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:26:37.032415 | orchestrator | 2025-05-03 01:26:33 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:26:37.032580 | orchestrator | 2025-05-03 01:26:37 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:26:40.080683 | orchestrator | 2025-05-03 01:26:37 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:26:40.080822 | orchestrator | 2025-05-03 01:26:40 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:26:43.129451 | orchestrator | 2025-05-03 01:26:40 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:26:43.129598 | orchestrator | 2025-05-03 01:26:43 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:26:46.183407 | orchestrator | 2025-05-03 01:26:43 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:26:46.183571 | orchestrator | 2025-05-03 01:26:46 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:26:49.229208 | orchestrator | 2025-05-03 01:26:46 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:26:49.229346 | orchestrator | 2025-05-03 01:26:49 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:26:52.305787 | orchestrator | 2025-05-03 01:26:49 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:26:52.305925 | orchestrator | 2025-05-03 01:26:52 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:26:55.362312 | orchestrator | 2025-05-03 01:26:52 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:26:55.362494 | orchestrator 
| 2025-05-03 01:26:55 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:26:58.424544 | orchestrator | 2025-05-03 01:26:55 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:26:58.424679 | orchestrator | 2025-05-03 01:26:58 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:27:01.477803 | orchestrator | 2025-05-03 01:26:58 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:27:01.477941 | orchestrator | 2025-05-03 01:27:01 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:27:01.478316 | orchestrator | 2025-05-03 01:27:01 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:27:04.518814 | orchestrator | 2025-05-03 01:27:04 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:27:07.576322 | orchestrator | 2025-05-03 01:27:04 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:27:07.576467 | orchestrator | 2025-05-03 01:27:07 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:27:10.626095 | orchestrator | 2025-05-03 01:27:07 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:27:10.626230 | orchestrator | 2025-05-03 01:27:10 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:27:13.683930 | orchestrator | 2025-05-03 01:27:10 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:27:13.684129 | orchestrator | 2025-05-03 01:27:13 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:27:16.733878 | orchestrator | 2025-05-03 01:27:13 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:27:16.734138 | orchestrator | 2025-05-03 01:27:16 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:27:19.779191 | orchestrator | 2025-05-03 01:27:16 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:27:19.779324 | orchestrator | 2025-05-03 
01:27:19 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:27:22.833887 | orchestrator | 2025-05-03 01:27:19 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:27:22.834129 | orchestrator | 2025-05-03 01:27:22 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:27:25.878746 | orchestrator | 2025-05-03 01:27:22 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:27:25.878926 | orchestrator | 2025-05-03 01:27:25 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:27:28.930636 | orchestrator | 2025-05-03 01:27:25 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:27:28.930786 | orchestrator | 2025-05-03 01:27:28 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:27:31.980128 | orchestrator | 2025-05-03 01:27:28 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:27:31.980304 | orchestrator | 2025-05-03 01:27:31 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:27:35.023917 | orchestrator | 2025-05-03 01:27:31 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:27:35.024133 | orchestrator | 2025-05-03 01:27:35 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:27:38.076833 | orchestrator | 2025-05-03 01:27:35 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:27:38.076958 | orchestrator | 2025-05-03 01:27:38 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:27:41.119857 | orchestrator | 2025-05-03 01:27:38 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:27:41.120062 | orchestrator | 2025-05-03 01:27:41 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:27:44.173317 | orchestrator | 2025-05-03 01:27:41 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:27:44.173460 | orchestrator | 2025-05-03 01:27:44 | INFO 
 | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:27:47.225663 | orchestrator | 2025-05-03 01:27:44 | INFO  | Wait 1 second(s) until the next check
2025-05-03 01:27:47.225779 | orchestrator | 2025-05-03 01:27:47 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
[... identical STARTED/wait polling entries for task 48a7cfec-8936-4280-adce-1507df83d421 repeat every ~3 seconds from 01:27:50 through 01:32:37 ...]
2025-05-03 01:32:40.130512 | orchestrator | 2025-05-03 01:32:40 | INFO  | Task 590213d7-f0f0-46fb-bfc9-8446a96cb3bd is in state STARTED
2025-05-03 01:32:40.130653 | orchestrator | 2025-05-03 01:32:40 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
[... both tasks polled in state STARTED through 01:32:46 ...]
2025-05-03 01:32:49.305804 | orchestrator | 2025-05-03 01:32:49 | INFO  | Task 590213d7-f0f0-46fb-bfc9-8446a96cb3bd is in state SUCCESS
2025-05-03 01:32:49.306859 | orchestrator | 2025-05-03 01:32:49 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
[... identical STARTED/wait polling entries for task 48a7cfec-8936-4280-adce-1507df83d421 continue every ~3 seconds from 01:32:52 through 01:36:07 ...]
2025-05-03 01:36:10.603102 | orchestrator | 2025-05-03 01:36:10 | INFO  | Task 
48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:36:13.651626 | orchestrator | 2025-05-03 01:36:10 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:36:13.651762 | orchestrator | 2025-05-03 01:36:13 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:36:16.691910 | orchestrator | 2025-05-03 01:36:13 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:36:16.692057 | orchestrator | 2025-05-03 01:36:16 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:36:19.741564 | orchestrator | 2025-05-03 01:36:16 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:36:19.741702 | orchestrator | 2025-05-03 01:36:19 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:36:22.790886 | orchestrator | 2025-05-03 01:36:19 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:36:22.791032 | orchestrator | 2025-05-03 01:36:22 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:36:25.851079 | orchestrator | 2025-05-03 01:36:22 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:36:25.851221 | orchestrator | 2025-05-03 01:36:25 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:36:28.904539 | orchestrator | 2025-05-03 01:36:25 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:36:28.904692 | orchestrator | 2025-05-03 01:36:28 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:36:31.960707 | orchestrator | 2025-05-03 01:36:28 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:36:31.960846 | orchestrator | 2025-05-03 01:36:31 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:36:35.019738 | orchestrator | 2025-05-03 01:36:31 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:36:35.019873 | orchestrator | 2025-05-03 01:36:35 | INFO  | Task 
48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:36:38.068996 | orchestrator | 2025-05-03 01:36:35 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:36:38.069149 | orchestrator | 2025-05-03 01:36:38 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:36:41.111389 | orchestrator | 2025-05-03 01:36:38 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:36:41.111586 | orchestrator | 2025-05-03 01:36:41 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:36:44.163920 | orchestrator | 2025-05-03 01:36:41 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:36:44.164076 | orchestrator | 2025-05-03 01:36:44 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:36:47.211631 | orchestrator | 2025-05-03 01:36:44 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:36:47.211732 | orchestrator | 2025-05-03 01:36:47 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:36:50.260764 | orchestrator | 2025-05-03 01:36:47 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:36:50.260898 | orchestrator | 2025-05-03 01:36:50 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:36:50.261041 | orchestrator | 2025-05-03 01:36:50 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:36:53.311955 | orchestrator | 2025-05-03 01:36:53 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:36:56.362750 | orchestrator | 2025-05-03 01:36:53 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:36:56.362888 | orchestrator | 2025-05-03 01:36:56 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:36:59.406858 | orchestrator | 2025-05-03 01:36:56 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:36:59.407000 | orchestrator | 2025-05-03 01:36:59 | INFO  | Task 
48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:37:02.452160 | orchestrator | 2025-05-03 01:36:59 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:37:02.452303 | orchestrator | 2025-05-03 01:37:02 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:37:05.502069 | orchestrator | 2025-05-03 01:37:02 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:37:05.502211 | orchestrator | 2025-05-03 01:37:05 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:37:08.562583 | orchestrator | 2025-05-03 01:37:05 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:37:08.562728 | orchestrator | 2025-05-03 01:37:08 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:37:11.622284 | orchestrator | 2025-05-03 01:37:08 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:37:11.622435 | orchestrator | 2025-05-03 01:37:11 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:37:14.676463 | orchestrator | 2025-05-03 01:37:11 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:37:14.676715 | orchestrator | 2025-05-03 01:37:14 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:37:17.734978 | orchestrator | 2025-05-03 01:37:14 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:37:17.735124 | orchestrator | 2025-05-03 01:37:17 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:37:20.794361 | orchestrator | 2025-05-03 01:37:17 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:37:20.794616 | orchestrator | 2025-05-03 01:37:20 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:37:23.846268 | orchestrator | 2025-05-03 01:37:20 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:37:23.846500 | orchestrator | 2025-05-03 01:37:23 | INFO  | Task 
48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:37:26.900300 | orchestrator | 2025-05-03 01:37:23 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:37:26.900444 | orchestrator | 2025-05-03 01:37:26 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:37:29.946502 | orchestrator | 2025-05-03 01:37:26 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:37:29.946704 | orchestrator | 2025-05-03 01:37:29 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:37:32.998181 | orchestrator | 2025-05-03 01:37:29 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:37:32.998319 | orchestrator | 2025-05-03 01:37:32 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:37:36.059090 | orchestrator | 2025-05-03 01:37:32 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:37:36.059234 | orchestrator | 2025-05-03 01:37:36 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:37:39.105064 | orchestrator | 2025-05-03 01:37:36 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:37:39.105205 | orchestrator | 2025-05-03 01:37:39 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:37:42.190998 | orchestrator | 2025-05-03 01:37:39 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:37:42.191142 | orchestrator | 2025-05-03 01:37:42 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:37:45.239810 | orchestrator | 2025-05-03 01:37:42 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:37:45.239944 | orchestrator | 2025-05-03 01:37:45 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:37:48.289644 | orchestrator | 2025-05-03 01:37:45 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:37:48.289783 | orchestrator | 2025-05-03 01:37:48 | INFO  | Task 
48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:37:51.356298 | orchestrator | 2025-05-03 01:37:48 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:37:51.356441 | orchestrator | 2025-05-03 01:37:51 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:37:54.402192 | orchestrator | 2025-05-03 01:37:51 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:37:54.402350 | orchestrator | 2025-05-03 01:37:54 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:37:57.450339 | orchestrator | 2025-05-03 01:37:54 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:37:57.450484 | orchestrator | 2025-05-03 01:37:57 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:38:00.497362 | orchestrator | 2025-05-03 01:37:57 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:38:00.497509 | orchestrator | 2025-05-03 01:38:00 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:38:03.554335 | orchestrator | 2025-05-03 01:38:00 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:38:03.554556 | orchestrator | 2025-05-03 01:38:03 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:38:06.610294 | orchestrator | 2025-05-03 01:38:03 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:38:06.610443 | orchestrator | 2025-05-03 01:38:06 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:38:09.654410 | orchestrator | 2025-05-03 01:38:06 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:38:09.654567 | orchestrator | 2025-05-03 01:38:09 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:38:12.711522 | orchestrator | 2025-05-03 01:38:09 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:38:12.711734 | orchestrator | 2025-05-03 01:38:12 | INFO  | Task 
48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:38:15.771643 | orchestrator | 2025-05-03 01:38:12 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:38:15.771793 | orchestrator | 2025-05-03 01:38:15 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:38:18.836756 | orchestrator | 2025-05-03 01:38:15 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:38:18.836926 | orchestrator | 2025-05-03 01:38:18 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:38:21.887406 | orchestrator | 2025-05-03 01:38:18 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:38:21.887544 | orchestrator | 2025-05-03 01:38:21 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:38:24.937062 | orchestrator | 2025-05-03 01:38:21 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:38:24.937228 | orchestrator | 2025-05-03 01:38:24 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:38:27.990130 | orchestrator | 2025-05-03 01:38:24 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:38:27.990267 | orchestrator | 2025-05-03 01:38:27 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:38:31.040449 | orchestrator | 2025-05-03 01:38:27 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:38:31.040595 | orchestrator | 2025-05-03 01:38:31 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:38:34.091278 | orchestrator | 2025-05-03 01:38:31 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:38:34.091418 | orchestrator | 2025-05-03 01:38:34 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:38:37.136901 | orchestrator | 2025-05-03 01:38:34 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:38:37.137040 | orchestrator | 2025-05-03 01:38:37 | INFO  | Task 
48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:38:40.191502 | orchestrator | 2025-05-03 01:38:37 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:38:40.191727 | orchestrator | 2025-05-03 01:38:40 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:38:43.238518 | orchestrator | 2025-05-03 01:38:40 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:38:43.238751 | orchestrator | 2025-05-03 01:38:43 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:38:46.286506 | orchestrator | 2025-05-03 01:38:43 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:38:46.286703 | orchestrator | 2025-05-03 01:38:46 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:38:49.337818 | orchestrator | 2025-05-03 01:38:46 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:38:49.337992 | orchestrator | 2025-05-03 01:38:49 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:38:52.380485 | orchestrator | 2025-05-03 01:38:49 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:38:52.380615 | orchestrator | 2025-05-03 01:38:52 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:38:55.431916 | orchestrator | 2025-05-03 01:38:52 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:38:55.432059 | orchestrator | 2025-05-03 01:38:55 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:38:58.477409 | orchestrator | 2025-05-03 01:38:55 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:38:58.477578 | orchestrator | 2025-05-03 01:38:58 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:39:01.528122 | orchestrator | 2025-05-03 01:38:58 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:39:01.528259 | orchestrator | 2025-05-03 01:39:01 | INFO  | Task 
48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:39:04.579786 | orchestrator | 2025-05-03 01:39:01 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:39:04.579921 | orchestrator | 2025-05-03 01:39:04 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:39:04.580070 | orchestrator | 2025-05-03 01:39:04 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:39:07.628495 | orchestrator | 2025-05-03 01:39:07 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:39:10.681020 | orchestrator | 2025-05-03 01:39:07 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:39:10.681165 | orchestrator | 2025-05-03 01:39:10 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:39:13.727492 | orchestrator | 2025-05-03 01:39:10 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:39:13.727632 | orchestrator | 2025-05-03 01:39:13 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:39:16.776871 | orchestrator | 2025-05-03 01:39:13 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:39:16.777007 | orchestrator | 2025-05-03 01:39:16 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:39:19.828105 | orchestrator | 2025-05-03 01:39:16 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:39:19.828250 | orchestrator | 2025-05-03 01:39:19 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:39:22.876976 | orchestrator | 2025-05-03 01:39:19 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:39:22.877115 | orchestrator | 2025-05-03 01:39:22 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:39:25.930178 | orchestrator | 2025-05-03 01:39:22 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:39:25.930323 | orchestrator | 2025-05-03 01:39:25 | INFO  | Task 
48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:39:28.978333 | orchestrator | 2025-05-03 01:39:25 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:39:28.978484 | orchestrator | 2025-05-03 01:39:28 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:39:32.032413 | orchestrator | 2025-05-03 01:39:28 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:39:32.032561 | orchestrator | 2025-05-03 01:39:32 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:39:35.077102 | orchestrator | 2025-05-03 01:39:32 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:39:35.077265 | orchestrator | 2025-05-03 01:39:35 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:39:38.123873 | orchestrator | 2025-05-03 01:39:35 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:39:38.124014 | orchestrator | 2025-05-03 01:39:38 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:39:41.169471 | orchestrator | 2025-05-03 01:39:38 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:39:41.169608 | orchestrator | 2025-05-03 01:39:41 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:39:44.216260 | orchestrator | 2025-05-03 01:39:41 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:39:44.216418 | orchestrator | 2025-05-03 01:39:44 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:39:47.265185 | orchestrator | 2025-05-03 01:39:44 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:39:47.265323 | orchestrator | 2025-05-03 01:39:47 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:39:50.319809 | orchestrator | 2025-05-03 01:39:47 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:39:50.319969 | orchestrator | 2025-05-03 01:39:50 | INFO  | Task 
48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:39:53.374176 | orchestrator | 2025-05-03 01:39:50 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:39:53.374319 | orchestrator | 2025-05-03 01:39:53 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:39:56.427174 | orchestrator | 2025-05-03 01:39:53 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:39:56.427317 | orchestrator | 2025-05-03 01:39:56 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:39:59.476099 | orchestrator | 2025-05-03 01:39:56 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:39:59.476259 | orchestrator | 2025-05-03 01:39:59 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:40:02.525089 | orchestrator | 2025-05-03 01:39:59 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:40:02.525241 | orchestrator | 2025-05-03 01:40:02 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:40:05.580906 | orchestrator | 2025-05-03 01:40:02 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:40:05.581088 | orchestrator | 2025-05-03 01:40:05 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:40:08.633206 | orchestrator | 2025-05-03 01:40:05 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:40:08.633380 | orchestrator | 2025-05-03 01:40:08 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:40:11.677742 | orchestrator | 2025-05-03 01:40:08 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:40:11.677905 | orchestrator | 2025-05-03 01:40:11 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:40:14.731824 | orchestrator | 2025-05-03 01:40:11 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:40:14.731979 | orchestrator | 2025-05-03 01:40:14 | INFO  | Task 
48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:40:17.783180 | orchestrator | 2025-05-03 01:40:14 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:40:17.783321 | orchestrator | 2025-05-03 01:40:17 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:40:20.832598 | orchestrator | 2025-05-03 01:40:17 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:40:20.832849 | orchestrator | 2025-05-03 01:40:20 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:40:23.880592 | orchestrator | 2025-05-03 01:40:20 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:40:23.880782 | orchestrator | 2025-05-03 01:40:23 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:40:26.929587 | orchestrator | 2025-05-03 01:40:23 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:40:26.929780 | orchestrator | 2025-05-03 01:40:26 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:40:29.980333 | orchestrator | 2025-05-03 01:40:26 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:40:29.980469 | orchestrator | 2025-05-03 01:40:29 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:40:33.028579 | orchestrator | 2025-05-03 01:40:29 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:40:33.028784 | orchestrator | 2025-05-03 01:40:33 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:40:36.071257 | orchestrator | 2025-05-03 01:40:33 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:40:36.071373 | orchestrator | 2025-05-03 01:40:36 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:40:39.116272 | orchestrator | 2025-05-03 01:40:36 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:40:39.116399 | orchestrator | 2025-05-03 01:40:39 | INFO  | Task 
48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:40:42.173356 | orchestrator | 2025-05-03 01:40:39 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:40:42.173498 | orchestrator | 2025-05-03 01:40:42 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:40:45.227271 | orchestrator | 2025-05-03 01:40:42 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:40:45.227382 | orchestrator | 2025-05-03 01:40:45 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:40:45.227544 | orchestrator | 2025-05-03 01:40:45 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:40:48.276812 | orchestrator | 2025-05-03 01:40:48 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:40:51.322105 | orchestrator | 2025-05-03 01:40:48 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:40:51.322251 | orchestrator | 2025-05-03 01:40:51 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:40:54.376192 | orchestrator | 2025-05-03 01:40:51 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:40:54.376343 | orchestrator | 2025-05-03 01:40:54 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:40:57.428645 | orchestrator | 2025-05-03 01:40:54 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:40:57.428826 | orchestrator | 2025-05-03 01:40:57 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:41:00.479538 | orchestrator | 2025-05-03 01:40:57 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:41:00.479684 | orchestrator | 2025-05-03 01:41:00 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:41:03.522577 | orchestrator | 2025-05-03 01:41:00 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:41:03.522718 | orchestrator | 2025-05-03 01:41:03 | INFO  | Task 
48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:41:06.574440 | orchestrator | 2025-05-03 01:41:03 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:41:06.574664 | orchestrator | 2025-05-03 01:41:06 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:41:09.625182 | orchestrator | 2025-05-03 01:41:06 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:41:09.625318 | orchestrator | 2025-05-03 01:41:09 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:41:12.672883 | orchestrator | 2025-05-03 01:41:09 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:41:12.673019 | orchestrator | 2025-05-03 01:41:12 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:41:15.723968 | orchestrator | 2025-05-03 01:41:12 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:41:15.724109 | orchestrator | 2025-05-03 01:41:15 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:41:18.770288 | orchestrator | 2025-05-03 01:41:15 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:41:18.770440 | orchestrator | 2025-05-03 01:41:18 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:41:21.819448 | orchestrator | 2025-05-03 01:41:18 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:41:21.819606 | orchestrator | 2025-05-03 01:41:21 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:41:24.866415 | orchestrator | 2025-05-03 01:41:21 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:41:24.866589 | orchestrator | 2025-05-03 01:41:24 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:41:27.911288 | orchestrator | 2025-05-03 01:41:24 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:41:27.911447 | orchestrator | 2025-05-03 01:41:27 | INFO  | Task 
48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:41:30.976907 | orchestrator | 2025-05-03 01:41:27 | INFO  | Wait 1 second(s) until the next check
[... repetitive polling output condensed: from 01:41:27 to 01:49:54 the orchestrator logged the pair "Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED" / "Wait 1 second(s) until the next check" roughly every 3 seconds. During this window a second task, 37b94682-c89c-46a5-887d-e50bf360fea0, first appeared in state STARTED at 01:42:38 and reached state SUCCESS at 01:42:50; task 48a7cfec-8936-4280-adce-1507df83d421 remained in state STARTED throughout ...]
2025-05-03 01:49:54.555771 | orchestrator | 2025-05-03 01:49:54 | INFO  | Task 
48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:49:57.606848 | orchestrator | 2025-05-03 01:49:54 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:49:57.606993 | orchestrator | 2025-05-03 01:49:57 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:50:00.654398 | orchestrator | 2025-05-03 01:49:57 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:50:00.655230 | orchestrator | 2025-05-03 01:50:00 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:50:03.706485 | orchestrator | 2025-05-03 01:50:00 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:50:03.706636 | orchestrator | 2025-05-03 01:50:03 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:50:06.747172 | orchestrator | 2025-05-03 01:50:03 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:50:06.747341 | orchestrator | 2025-05-03 01:50:06 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:50:09.796455 | orchestrator | 2025-05-03 01:50:06 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:50:09.796596 | orchestrator | 2025-05-03 01:50:09 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:50:12.844987 | orchestrator | 2025-05-03 01:50:09 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:50:12.845183 | orchestrator | 2025-05-03 01:50:12 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:50:15.898251 | orchestrator | 2025-05-03 01:50:12 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:50:15.898387 | orchestrator | 2025-05-03 01:50:15 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:50:18.949659 | orchestrator | 2025-05-03 01:50:15 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:50:18.949802 | orchestrator | 2025-05-03 01:50:18 | INFO  | Task 
48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:50:22.007940 | orchestrator | 2025-05-03 01:50:18 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:50:22.008111 | orchestrator | 2025-05-03 01:50:22 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:50:25.064128 | orchestrator | 2025-05-03 01:50:22 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:50:25.064275 | orchestrator | 2025-05-03 01:50:25 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:50:28.119693 | orchestrator | 2025-05-03 01:50:25 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:50:28.119840 | orchestrator | 2025-05-03 01:50:28 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:50:31.168658 | orchestrator | 2025-05-03 01:50:28 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:50:31.168808 | orchestrator | 2025-05-03 01:50:31 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:50:31.168894 | orchestrator | 2025-05-03 01:50:31 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:50:34.226533 | orchestrator | 2025-05-03 01:50:34 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:50:37.275732 | orchestrator | 2025-05-03 01:50:34 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:50:37.275877 | orchestrator | 2025-05-03 01:50:37 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:50:40.319562 | orchestrator | 2025-05-03 01:50:37 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:50:40.319733 | orchestrator | 2025-05-03 01:50:40 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:50:43.370202 | orchestrator | 2025-05-03 01:50:40 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:50:43.370364 | orchestrator | 2025-05-03 01:50:43 | INFO  | Task 
48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:50:46.419835 | orchestrator | 2025-05-03 01:50:43 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:50:46.419975 | orchestrator | 2025-05-03 01:50:46 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:50:49.471001 | orchestrator | 2025-05-03 01:50:46 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:50:49.471183 | orchestrator | 2025-05-03 01:50:49 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:50:52.527480 | orchestrator | 2025-05-03 01:50:49 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:50:52.527623 | orchestrator | 2025-05-03 01:50:52 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:50:55.579208 | orchestrator | 2025-05-03 01:50:52 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:50:55.579355 | orchestrator | 2025-05-03 01:50:55 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:50:58.643618 | orchestrator | 2025-05-03 01:50:55 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:50:58.643754 | orchestrator | 2025-05-03 01:50:58 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:51:01.696619 | orchestrator | 2025-05-03 01:50:58 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:51:01.696755 | orchestrator | 2025-05-03 01:51:01 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:51:04.754595 | orchestrator | 2025-05-03 01:51:01 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:51:04.754740 | orchestrator | 2025-05-03 01:51:04 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:51:07.807822 | orchestrator | 2025-05-03 01:51:04 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:51:07.807970 | orchestrator | 2025-05-03 01:51:07 | INFO  | Task 
48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:51:10.861118 | orchestrator | 2025-05-03 01:51:07 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:51:10.861305 | orchestrator | 2025-05-03 01:51:10 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:51:13.911583 | orchestrator | 2025-05-03 01:51:10 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:51:13.911750 | orchestrator | 2025-05-03 01:51:13 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:51:16.969232 | orchestrator | 2025-05-03 01:51:13 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:51:16.969399 | orchestrator | 2025-05-03 01:51:16 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:51:20.020363 | orchestrator | 2025-05-03 01:51:16 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:51:20.020510 | orchestrator | 2025-05-03 01:51:20 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:51:23.070993 | orchestrator | 2025-05-03 01:51:20 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:51:23.071192 | orchestrator | 2025-05-03 01:51:23 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:51:26.115762 | orchestrator | 2025-05-03 01:51:23 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:51:26.115902 | orchestrator | 2025-05-03 01:51:26 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:51:29.163849 | orchestrator | 2025-05-03 01:51:26 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:51:29.164010 | orchestrator | 2025-05-03 01:51:29 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:51:32.214944 | orchestrator | 2025-05-03 01:51:29 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:51:32.215120 | orchestrator | 2025-05-03 01:51:32 | INFO  | Task 
48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:51:35.265643 | orchestrator | 2025-05-03 01:51:32 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:51:35.265777 | orchestrator | 2025-05-03 01:51:35 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:51:38.309887 | orchestrator | 2025-05-03 01:51:35 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:51:38.310148 | orchestrator | 2025-05-03 01:51:38 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:51:41.355519 | orchestrator | 2025-05-03 01:51:38 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:51:41.355655 | orchestrator | 2025-05-03 01:51:41 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:51:41.355739 | orchestrator | 2025-05-03 01:51:41 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:51:44.402748 | orchestrator | 2025-05-03 01:51:44 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:51:47.456585 | orchestrator | 2025-05-03 01:51:44 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:51:47.456722 | orchestrator | 2025-05-03 01:51:47 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:51:50.511832 | orchestrator | 2025-05-03 01:51:47 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:51:50.511981 | orchestrator | 2025-05-03 01:51:50 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:51:53.571587 | orchestrator | 2025-05-03 01:51:50 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:51:53.571728 | orchestrator | 2025-05-03 01:51:53 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:51:56.621759 | orchestrator | 2025-05-03 01:51:53 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:51:56.622635 | orchestrator | 2025-05-03 01:51:56 | INFO  | Task 
48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:51:59.670869 | orchestrator | 2025-05-03 01:51:56 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:51:59.670964 | orchestrator | 2025-05-03 01:51:59 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:52:02.722424 | orchestrator | 2025-05-03 01:51:59 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:52:02.722565 | orchestrator | 2025-05-03 01:52:02 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:52:05.773847 | orchestrator | 2025-05-03 01:52:02 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:52:05.773985 | orchestrator | 2025-05-03 01:52:05 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:52:08.831886 | orchestrator | 2025-05-03 01:52:05 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:52:08.832033 | orchestrator | 2025-05-03 01:52:08 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:52:11.888234 | orchestrator | 2025-05-03 01:52:08 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:52:11.888407 | orchestrator | 2025-05-03 01:52:11 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:52:14.943560 | orchestrator | 2025-05-03 01:52:11 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:52:14.943704 | orchestrator | 2025-05-03 01:52:14 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:52:17.996546 | orchestrator | 2025-05-03 01:52:14 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:52:17.996683 | orchestrator | 2025-05-03 01:52:17 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:52:21.041766 | orchestrator | 2025-05-03 01:52:17 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:52:21.041912 | orchestrator | 2025-05-03 01:52:21 | INFO  | Task 
48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:52:24.092453 | orchestrator | 2025-05-03 01:52:21 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:52:24.092585 | orchestrator | 2025-05-03 01:52:24 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:52:27.140144 | orchestrator | 2025-05-03 01:52:24 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:52:27.140292 | orchestrator | 2025-05-03 01:52:27 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:52:30.189498 | orchestrator | 2025-05-03 01:52:27 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:52:30.189643 | orchestrator | 2025-05-03 01:52:30 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:52:33.234814 | orchestrator | 2025-05-03 01:52:30 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:52:33.234957 | orchestrator | 2025-05-03 01:52:33 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:52:36.283706 | orchestrator | 2025-05-03 01:52:33 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:52:36.283848 | orchestrator | 2025-05-03 01:52:36 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:52:39.344512 | orchestrator | 2025-05-03 01:52:36 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:52:39.344656 | orchestrator | 2025-05-03 01:52:39 | INFO  | Task 9e5dce63-eb47-43e2-b6af-ecbe70230261 is in state STARTED 2025-05-03 01:52:39.346109 | orchestrator | 2025-05-03 01:52:39 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:52:42.402658 | orchestrator | 2025-05-03 01:52:39 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:52:42.402824 | orchestrator | 2025-05-03 01:52:42 | INFO  | Task 9e5dce63-eb47-43e2-b6af-ecbe70230261 is in state STARTED 2025-05-03 01:52:42.403791 | orchestrator | 2025-05-03 01:52:42 | INFO 
 | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:52:42.404538 | orchestrator | 2025-05-03 01:52:42 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:52:45.457183 | orchestrator | 2025-05-03 01:52:45 | INFO  | Task 9e5dce63-eb47-43e2-b6af-ecbe70230261 is in state STARTED 2025-05-03 01:52:45.458171 | orchestrator | 2025-05-03 01:52:45 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:52:48.514504 | orchestrator | 2025-05-03 01:52:45 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:52:48.514615 | orchestrator | 2025-05-03 01:52:48 | INFO  | Task 9e5dce63-eb47-43e2-b6af-ecbe70230261 is in state STARTED 2025-05-03 01:52:48.516698 | orchestrator | 2025-05-03 01:52:48 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:52:51.568558 | orchestrator | 2025-05-03 01:52:48 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:52:51.568705 | orchestrator | 2025-05-03 01:52:51 | INFO  | Task 9e5dce63-eb47-43e2-b6af-ecbe70230261 is in state SUCCESS 2025-05-03 01:52:51.571654 | orchestrator | 2025-05-03 01:52:51 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:52:54.625639 | orchestrator | 2025-05-03 01:52:51 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:52:54.625776 | orchestrator | 2025-05-03 01:52:54 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:52:57.677462 | orchestrator | 2025-05-03 01:52:54 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:52:57.677608 | orchestrator | 2025-05-03 01:52:57 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:53:00.722386 | orchestrator | 2025-05-03 01:52:57 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:53:00.722548 | orchestrator | 2025-05-03 01:53:00 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:53:03.771280 | 
orchestrator | 2025-05-03 01:53:00 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:53:03.771419 | orchestrator | 2025-05-03 01:53:03 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:53:06.824364 | orchestrator | 2025-05-03 01:53:03 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:53:06.824500 | orchestrator | 2025-05-03 01:53:06 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:53:09.879265 | orchestrator | 2025-05-03 01:53:06 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:53:09.879412 | orchestrator | 2025-05-03 01:53:09 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:53:12.931908 | orchestrator | 2025-05-03 01:53:09 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:53:12.932051 | orchestrator | 2025-05-03 01:53:12 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:53:15.987060 | orchestrator | 2025-05-03 01:53:12 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:53:15.987262 | orchestrator | 2025-05-03 01:53:15 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:53:19.035416 | orchestrator | 2025-05-03 01:53:15 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:53:19.035553 | orchestrator | 2025-05-03 01:53:19 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:53:22.074680 | orchestrator | 2025-05-03 01:53:19 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:53:22.074828 | orchestrator | 2025-05-03 01:53:22 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:53:25.113615 | orchestrator | 2025-05-03 01:53:22 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:53:25.113764 | orchestrator | 2025-05-03 01:53:25 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:53:28.167699 | orchestrator | 
2025-05-03 01:53:25 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:53:28.167840 | orchestrator | 2025-05-03 01:53:28 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:53:31.228386 | orchestrator | 2025-05-03 01:53:28 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:53:31.228588 | orchestrator | 2025-05-03 01:53:31 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:53:34.283909 | orchestrator | 2025-05-03 01:53:31 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:53:34.284110 | orchestrator | 2025-05-03 01:53:34 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:53:37.326120 | orchestrator | 2025-05-03 01:53:34 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:53:37.326258 | orchestrator | 2025-05-03 01:53:37 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:53:40.369969 | orchestrator | 2025-05-03 01:53:37 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:53:40.370246 | orchestrator | 2025-05-03 01:53:40 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:53:43.425917 | orchestrator | 2025-05-03 01:53:40 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:53:43.426158 | orchestrator | 2025-05-03 01:53:43 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:53:46.484880 | orchestrator | 2025-05-03 01:53:43 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:53:46.485030 | orchestrator | 2025-05-03 01:53:46 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:53:49.532190 | orchestrator | 2025-05-03 01:53:46 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:53:49.532330 | orchestrator | 2025-05-03 01:53:49 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:53:52.581861 | orchestrator | 2025-05-03 
01:53:49 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:53:52.582006 | orchestrator | 2025-05-03 01:53:52 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:53:55.630372 | orchestrator | 2025-05-03 01:53:52 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:53:55.630511 | orchestrator | 2025-05-03 01:53:55 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:53:58.680641 | orchestrator | 2025-05-03 01:53:55 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:53:58.680784 | orchestrator | 2025-05-03 01:53:58 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:54:01.734165 | orchestrator | 2025-05-03 01:53:58 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:54:01.734306 | orchestrator | 2025-05-03 01:54:01 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:54:04.787181 | orchestrator | 2025-05-03 01:54:01 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:54:04.787315 | orchestrator | 2025-05-03 01:54:04 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:54:07.835654 | orchestrator | 2025-05-03 01:54:04 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:54:07.835812 | orchestrator | 2025-05-03 01:54:07 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:54:10.880429 | orchestrator | 2025-05-03 01:54:07 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:54:10.880586 | orchestrator | 2025-05-03 01:54:10 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:54:13.928051 | orchestrator | 2025-05-03 01:54:10 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:54:13.928234 | orchestrator | 2025-05-03 01:54:13 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:54:16.979345 | orchestrator | 2025-05-03 01:54:13 | INFO 
 | Wait 1 second(s) until the next check 2025-05-03 01:54:16.979488 | orchestrator | 2025-05-03 01:54:16 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:54:20.027003 | orchestrator | 2025-05-03 01:54:16 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:54:20.027213 | orchestrator | 2025-05-03 01:54:20 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:54:23.072230 | orchestrator | 2025-05-03 01:54:20 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:54:23.072379 | orchestrator | 2025-05-03 01:54:23 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:54:26.112338 | orchestrator | 2025-05-03 01:54:23 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:54:26.112449 | orchestrator | 2025-05-03 01:54:26 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:54:29.162421 | orchestrator | 2025-05-03 01:54:26 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:54:29.162581 | orchestrator | 2025-05-03 01:54:29 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:54:32.210677 | orchestrator | 2025-05-03 01:54:29 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:54:32.210824 | orchestrator | 2025-05-03 01:54:32 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:54:35.260861 | orchestrator | 2025-05-03 01:54:32 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:54:35.261006 | orchestrator | 2025-05-03 01:54:35 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:54:38.313317 | orchestrator | 2025-05-03 01:54:35 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:54:38.313451 | orchestrator | 2025-05-03 01:54:38 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:54:41.352947 | orchestrator | 2025-05-03 01:54:38 | INFO  | Wait 1 
second(s) until the next check 2025-05-03 01:54:41.353132 | orchestrator | 2025-05-03 01:54:41 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:54:44.407058 | orchestrator | 2025-05-03 01:54:41 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:54:44.407238 | orchestrator | 2025-05-03 01:54:44 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:54:47.450518 | orchestrator | 2025-05-03 01:54:44 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:54:47.450661 | orchestrator | 2025-05-03 01:54:47 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:54:50.503984 | orchestrator | 2025-05-03 01:54:47 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:54:50.504179 | orchestrator | 2025-05-03 01:54:50 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:54:53.555289 | orchestrator | 2025-05-03 01:54:50 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:54:53.555435 | orchestrator | 2025-05-03 01:54:53 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:54:56.606386 | orchestrator | 2025-05-03 01:54:53 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:54:56.606524 | orchestrator | 2025-05-03 01:54:56 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:54:59.663665 | orchestrator | 2025-05-03 01:54:56 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:54:59.663826 | orchestrator | 2025-05-03 01:54:59 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:55:02.715586 | orchestrator | 2025-05-03 01:54:59 | INFO  | Wait 1 second(s) until the next check 2025-05-03 01:55:02.715742 | orchestrator | 2025-05-03 01:55:02 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED 2025-05-03 01:55:05.769849 | orchestrator | 2025-05-03 01:55:02 | INFO  | Wait 1 second(s) until 
the next check
2025-05-03 01:55:05.770069 | orchestrator | 2025-05-03 01:55:05 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 01:55:08.844013 | orchestrator | 2025-05-03 01:55:05 | INFO  | Wait 1 second(s) until the next check
2025-05-03 02:00:29.161434 | orchestrator | 2025-05-03 02:00:29 | INFO  | Task 48a7cfec-8936-4280-adce-1507df83d421 is in state STARTED
2025-05-03 02:00:31.598645 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-05-03 02:00:31.603435 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-05-03 02:00:32.317169 |
2025-05-03 02:00:32.317353 | PLAY [Post output play]
2025-05-03 02:00:32.348307 |
2025-05-03 02:00:32.348491 | LOOP [stage-output : Register sources]
2025-05-03 02:00:32.425244 |
2025-05-03 02:00:32.425483 | TASK [stage-output : Check sudo]
2025-05-03 02:00:33.143672 | orchestrator | sudo: a password is required
2025-05-03 02:00:33.468788 | orchestrator | ok: Runtime: 0:00:00.014733
2025-05-03 02:00:33.485430 |
2025-05-03 02:00:33.485565 | LOOP [stage-output : Set source and destination for files and folders]
2025-05-03 02:00:33.525810 |
2025-05-03 02:00:33.526090 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-05-03 02:00:33.618608 | orchestrator | ok
2025-05-03 02:00:33.629510 |
2025-05-03 02:00:33.629641 | LOOP [stage-output : Ensure target folders exist]
2025-05-03 02:00:34.083317 | orchestrator | ok: "docs"
2025-05-03 02:00:34.083720 |
2025-05-03 02:00:34.333193 | orchestrator | ok: "artifacts"
2025-05-03 02:00:34.567856 | orchestrator | ok: "logs"
2025-05-03 02:00:34.600863 |
2025-05-03 02:00:34.601144 | LOOP [stage-output : Copy files and folders to staging folder]
2025-05-03 02:00:34.647021 |
2025-05-03 02:00:34.647359 | TASK [stage-output
: Make all log files readable]
2025-05-03 02:00:34.955418 | orchestrator | ok
2025-05-03 02:00:34.967781 |
2025-05-03 02:00:34.967965 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-05-03 02:00:35.025729 | orchestrator | skipping: Conditional result was False
2025-05-03 02:00:35.044637 |
2025-05-03 02:00:35.044970 | TASK [stage-output : Discover log files for compression]
2025-05-03 02:00:35.076090 | orchestrator | skipping: Conditional result was False
2025-05-03 02:00:35.094059 |
2025-05-03 02:00:35.094248 | LOOP [stage-output : Archive everything from logs]
2025-05-03 02:00:35.189892 |
2025-05-03 02:00:35.190274 | PLAY [Post cleanup play]
2025-05-03 02:00:35.220504 |
2025-05-03 02:00:35.220693 | TASK [Set cloud fact (Zuul deployment)]
2025-05-03 02:00:35.288692 | orchestrator | ok
2025-05-03 02:00:35.299808 |
2025-05-03 02:00:35.299970 | TASK [Set cloud fact (local deployment)]
2025-05-03 02:00:35.336793 | orchestrator | skipping: Conditional result was False
2025-05-03 02:00:35.356616 |
2025-05-03 02:00:35.356848 | TASK [Clean the cloud environment]
2025-05-03 02:00:36.002577 | orchestrator | 2025-05-03 02:00:36 - clean up servers
2025-05-03 02:00:36.999376 | orchestrator | 2025-05-03 02:00:36 - testbed-manager
2025-05-03 02:00:38.100366 | orchestrator | 2025-05-03 02:00:38 - testbed-node-3
2025-05-03 02:00:38.202830 | orchestrator | 2025-05-03 02:00:38 - testbed-node-5
2025-05-03 02:00:38.297771 | orchestrator | 2025-05-03 02:00:38 - testbed-node-0
2025-05-03 02:00:38.421081 | orchestrator | 2025-05-03 02:00:38 - testbed-node-1
2025-05-03 02:00:38.538208 | orchestrator | 2025-05-03 02:00:38 - testbed-node-2
2025-05-03 02:00:38.639022 | orchestrator | 2025-05-03 02:00:38 - testbed-node-4
2025-05-03 02:00:38.770149 | orchestrator | 2025-05-03 02:00:38 - clean up keypairs
2025-05-03 02:00:38.790300 | orchestrator | 2025-05-03 02:00:38 - testbed
2025-05-03 02:00:38.816530 | orchestrator | 2025-05-03 02:00:38 - wait for servers to be gone
2025-05-03 02:00:45.703980 | orchestrator | 2025-05-03 02:00:45 - clean up ports
2025-05-03 02:00:45.913845 | orchestrator | 2025-05-03 02:00:45 - 30651cef-084b-42af-abc8-09d3b4fab1d9
2025-05-03 02:00:46.196925 | orchestrator | 2025-05-03 02:00:46 - 3d0b3a5b-db6f-412b-946c-73733a276f3d
2025-05-03 02:00:46.389315 | orchestrator | 2025-05-03 02:00:46 - 3fbe2154-7fe2-499f-b5d3-d6c4070148af
2025-05-03 02:00:46.582906 | orchestrator | 2025-05-03 02:00:46 - 62a35bd7-ea01-41c4-9ec5-c7cbc4196390
2025-05-03 02:00:46.824079 | orchestrator | 2025-05-03 02:00:46 - 91955358-e201-4b2c-8c8b-2e6509b6713b
2025-05-03 02:00:47.016626 | orchestrator | 2025-05-03 02:00:47 - 96c9e881-3660-4dc4-a3ec-53eb6058a993
2025-05-03 02:00:47.384382 | orchestrator | 2025-05-03 02:00:47 - e3c5250f-8547-4ad5-95f5-3a0de9f7a42d
2025-05-03 02:00:47.599791 | orchestrator | 2025-05-03 02:00:47 - clean up volumes
2025-05-03 02:00:47.752615 | orchestrator | 2025-05-03 02:00:47 - testbed-volume-2-node-base
2025-05-03 02:00:47.787404 | orchestrator | 2025-05-03 02:00:47 - testbed-volume-1-node-base
2025-05-03 02:00:47.826528 | orchestrator | 2025-05-03 02:00:47 - testbed-volume-3-node-base
2025-05-03 02:00:47.872082 | orchestrator | 2025-05-03 02:00:47 - testbed-volume-4-node-base
2025-05-03 02:00:47.909765 | orchestrator | 2025-05-03 02:00:47 - testbed-volume-5-node-base
2025-05-03 02:00:47.952488 | orchestrator | 2025-05-03 02:00:47 - testbed-volume-0-node-base
2025-05-03 02:00:47.997705 | orchestrator | 2025-05-03 02:00:47 - testbed-volume-9-node-3
2025-05-03 02:00:48.040390 | orchestrator | 2025-05-03 02:00:48 - testbed-volume-6-node-0
2025-05-03 02:00:48.083905 | orchestrator | 2025-05-03 02:00:48 - testbed-volume-manager-base
2025-05-03 02:00:48.135499 | orchestrator | 2025-05-03 02:00:48 - testbed-volume-10-node-4
2025-05-03 02:00:48.181009 | orchestrator | 2025-05-03 02:00:48 - testbed-volume-3-node-3
2025-05-03 02:00:48.226425 | orchestrator | 2025-05-03 02:00:48 - testbed-volume-13-node-1
2025-05-03 02:00:48.285209 | orchestrator | 2025-05-03 02:00:48 - testbed-volume-11-node-5
2025-05-03 02:00:48.332729 | orchestrator | 2025-05-03 02:00:48 - testbed-volume-12-node-0
2025-05-03 02:00:48.376637 | orchestrator | 2025-05-03 02:00:48 - testbed-volume-7-node-1
2025-05-03 02:00:48.424141 | orchestrator | 2025-05-03 02:00:48 - testbed-volume-17-node-5
2025-05-03 02:00:48.463087 | orchestrator | 2025-05-03 02:00:48 - testbed-volume-0-node-0
2025-05-03 02:00:48.509710 | orchestrator | 2025-05-03 02:00:48 - testbed-volume-2-node-2
2025-05-03 02:00:48.549756 | orchestrator | 2025-05-03 02:00:48 - testbed-volume-1-node-1
2025-05-03 02:00:48.593426 | orchestrator | 2025-05-03 02:00:48 - testbed-volume-4-node-4
2025-05-03 02:00:48.638997 | orchestrator | 2025-05-03 02:00:48 - testbed-volume-16-node-4
2025-05-03 02:00:48.678901 | orchestrator | 2025-05-03 02:00:48 - testbed-volume-5-node-5
2025-05-03 02:00:48.719469 | orchestrator | 2025-05-03 02:00:48 - testbed-volume-15-node-3
2025-05-03 02:00:48.760986 | orchestrator | 2025-05-03 02:00:48 - testbed-volume-8-node-2
2025-05-03 02:00:48.800608 | orchestrator | 2025-05-03 02:00:48 - testbed-volume-14-node-2
2025-05-03 02:00:48.842907 | orchestrator | 2025-05-03 02:00:48 - disconnect routers
2025-05-03 02:00:49.763621 | orchestrator | 2025-05-03 02:00:49 - testbed
2025-05-03 02:00:50.457494 | orchestrator | 2025-05-03 02:00:50 - clean up subnets
2025-05-03 02:00:50.488098 | orchestrator | 2025-05-03 02:00:50 - subnet-testbed-management
2025-05-03 02:00:50.604324 | orchestrator | 2025-05-03 02:00:50 - clean up networks
2025-05-03 02:00:50.814484 | orchestrator | 2025-05-03 02:00:50 - net-testbed-management
2025-05-03 02:00:51.056617 | orchestrator | 2025-05-03 02:00:51 - clean up security groups
2025-05-03 02:00:51.090127 | orchestrator | 2025-05-03 02:00:51 - testbed-management
2025-05-03 02:00:51.183231 | orchestrator | 2025-05-03 02:00:51 - testbed-node
2025-05-03 02:00:51.262831 | orchestrator | 2025-05-03 02:00:51 - clean up floating ips
2025-05-03 02:00:51.295899 | orchestrator | 2025-05-03 02:00:51 - 81.163.192.136
2025-05-03 02:00:51.683287 | orchestrator | 2025-05-03 02:00:51 - clean up routers
2025-05-03 02:00:51.763132 | orchestrator | 2025-05-03 02:00:51 - testbed
2025-05-03 02:00:52.469183 | orchestrator | changed
2025-05-03 02:00:52.511668 |
2025-05-03 02:00:52.511779 | PLAY RECAP
2025-05-03 02:00:52.511836 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-05-03 02:00:52.511862 |
2025-05-03 02:00:52.638071 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-05-03 02:00:52.646144 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-03 02:00:53.382096 |
2025-05-03 02:00:53.382329 | PLAY [Base post-fetch]
2025-05-03 02:00:53.427360 |
2025-05-03 02:00:53.427529 | TASK [fetch-output : Set log path for multiple nodes]
2025-05-03 02:00:53.494722 | orchestrator | skipping: Conditional result was False
2025-05-03 02:00:53.509956 |
2025-05-03 02:00:53.510226 | TASK [fetch-output : Set log path for single node]
2025-05-03 02:00:53.585314 | orchestrator | ok
2025-05-03 02:00:53.595442 |
2025-05-03 02:00:53.595577 | LOOP [fetch-output : Ensure local output dirs]
2025-05-03 02:00:54.083803 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/b5e89fbdb2b248eda6b44d358a1c2c68/work/logs"
2025-05-03 02:00:54.381788 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/b5e89fbdb2b248eda6b44d358a1c2c68/work/artifacts"
2025-05-03 02:00:54.648689 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/b5e89fbdb2b248eda6b44d358a1c2c68/work/docs"
2025-05-03 02:00:54.671211 |
2025-05-03 02:00:54.671371 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-05-03 02:00:55.509204 | orchestrator | changed: .d..t...... ./
2025-05-03 02:00:55.509612 | orchestrator | changed: All items complete
2025-05-03 02:00:55.509675 |
2025-05-03 02:00:56.131707 | orchestrator | changed: .d..t...... ./
2025-05-03 02:00:56.727879 | orchestrator | changed: .d..t...... ./
2025-05-03 02:00:56.753845 |
2025-05-03 02:00:56.753986 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-05-03 02:00:56.799382 | orchestrator | skipping: Conditional result was False
2025-05-03 02:00:56.807693 | orchestrator | skipping: Conditional result was False
2025-05-03 02:00:56.854970 |
2025-05-03 02:00:56.855084 | PLAY RECAP
2025-05-03 02:00:56.855144 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-05-03 02:00:56.855172 |
2025-05-03 02:00:56.986540 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-03 02:00:56.989944 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-03 02:00:57.686860 |
2025-05-03 02:00:57.687101 | PLAY [Base post]
2025-05-03 02:00:57.717620 |
2025-05-03 02:00:57.717783 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-05-03 02:00:58.713972 | orchestrator | changed
2025-05-03 02:00:58.754229 |
2025-05-03 02:00:58.754429 | PLAY RECAP
2025-05-03 02:00:58.754497 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-05-03 02:00:58.754568 |
2025-05-03 02:00:58.883498 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-03 02:00:58.886825 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-05-03 02:00:59.682890 |
2025-05-03 02:00:59.683196 | PLAY [Base post-logs]
2025-05-03 02:00:59.699861 |
2025-05-03 02:00:59.700002 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-05-03 02:01:00.182748 | localhost | changed
2025-05-03 02:01:00.189969 |
2025-05-03 02:01:00.190223 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-05-03 02:01:00.233694 | localhost | ok
2025-05-03 02:01:00.244483 |
2025-05-03 02:01:00.244636 | TASK [Set zuul-log-path fact]
2025-05-03 02:01:00.267795 | localhost | ok
2025-05-03 02:01:00.282752 |
2025-05-03 02:01:00.282887 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-03 02:01:00.312011 | localhost | ok
2025-05-03 02:01:00.318915 |
2025-05-03 02:01:00.319027 | TASK [upload-logs : Create log directories]
2025-05-03 02:01:00.865196 | localhost | changed
2025-05-03 02:01:00.871369 |
2025-05-03 02:01:00.871497 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-05-03 02:01:01.411601 | localhost -> localhost | ok: Runtime: 0:00:00.005595
2025-05-03 02:01:01.425038 |
2025-05-03 02:01:01.425273 | TASK [upload-logs : Upload logs to log server]
2025-05-03 02:01:02.030751 | localhost | Output suppressed because no_log was given
2025-05-03 02:01:02.036327 |
2025-05-03 02:01:02.036580 | LOOP [upload-logs : Compress console log and json output]
2025-05-03 02:01:02.112869 | localhost | skipping: Conditional result was False
2025-05-03 02:01:02.130037 | localhost | skipping: Conditional result was False
2025-05-03 02:01:02.144798 |
2025-05-03 02:01:02.145012 | LOOP [upload-logs : Upload compressed console log and json output]
2025-05-03 02:01:02.210151 | localhost | skipping: Conditional result was False
2025-05-03 02:01:02.210825 |
2025-05-03 02:01:02.222792 | localhost | skipping: Conditional result was False
2025-05-03 02:01:02.242176 |
2025-05-03 02:01:02.242387 | LOOP [upload-logs : Upload console log and json output]